Johan Lövgren

Johan is an extremely dynamic and energetic person who brings drive and competence to the team. An entrepreneur for several years, he has proven capable of delivering great results in record time and on a small budget. His work has been successfully released to the market and has resulted in well-known products.

His broad competence ranges from design to electronics development and embedded coding. This is a result of his personality as well as his MSc degree from Chalmers University of Technology.

Andreas Angervik

Andreas is a person with a great sense of humor combined with great skills in Java and backend development technologies. During his time with Vinnter he has proven capable of delivering great results while helping the rest of us grow in competence through his knowledge-sharing mentality.

He is, and will become even more so, one of our AWS cloud experts.

The Evolution of Micro Processing Units (MPUs)

Microprocessors or micro processing units (MPUs) are nothing short of amazing. By integrating the complete computation engine onto one electronic component, the computing power that once required a room full of equipment can now be fabricated on a single chip, usually about the size of a fingernail, though it can be much smaller still.

Serving as the central processing unit (CPU) in computers, microprocessors contain thousands of electronic components and use a collection of machine instructions to perform mathematical operations and move data from one memory location to another. They contain an address bus that sends addresses to memory, read (RD) and write (WR) lines that tell the memory whether the processor wants to get or set the contents of the addressed location, and a data bus that sends and receives data to and from memory. Micro processing units also include a clock line whose pulses sequence the processor, and a reset line that resets the program counter and restarts execution.
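The interplay of program counter, memory reads and writes, clock steps and the reset line can be illustrated with a toy model. The Python sketch below uses a hypothetical four-instruction machine invented purely for illustration, not any real MPU's instruction set:

```python
# A toy model of a micro processing unit: a program counter steps through
# memory, and reads/writes stand in for the RD/WR lines and data bus.
# (Hypothetical instruction set, for illustration only.)

class ToyMPU:
    def __init__(self, memory):
        self.memory = list(memory)  # shared address space for code and data
        self.reset()

    def reset(self):
        # The reset line returns the program counter to address 0.
        self.pc = 0
        self.acc = 0
        self.halted = False

    def step(self):
        # One clock pulse: fetch the instruction at pc, then execute it.
        op, arg = self.memory[self.pc]
        self.pc += 1
        if op == "LOAD":        # read (RD): memory -> accumulator
            self.acc = self.memory[arg]
        elif op == "ADD":       # read (RD) and add to accumulator
            self.acc += self.memory[arg]
        elif op == "STORE":     # write (WR): accumulator -> memory
            self.memory[arg] = self.acc
        elif op == "HALT":
            self.halted = True

    def run(self):
        while not self.halted:
            self.step()

# Program: load memory[4], add memory[5], store the sum in memory[6].
program = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 2, 3, 0]
mpu = ToyMPU(program)
mpu.run()
print(mpu.memory[6])  # 5
```

Real processors do essentially this, billions of times per second, with the clock line pacing each fetch-execute cycle.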

(Basic micro processing unit.)

The microprocessor is at the very core of every computer, be it a PC, laptop, server, or mobile device, serving as the instrument’s “brain”. They are also found in home devices, such as TVs, DVD players, microwaves, ovens, washing machines, stereo systems, alarm clocks, and home lighting systems. Industrial items contain micro processing units, too, including cars, boats, planes, manufacturing equipment and machinery, gasoline pumps, credit card processing units, traffic control devices, elevators, and security systems. In fact, pretty much everything we do today depends on microprocessors – and they are, of course, a fundamental component of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices which are becoming more and more prevalent in homes and crucial to businesses all over the globe.

It’s safe to say that these tiny pieces of equipment have had – and will continue to have – an enormous influence technologically, economically, and culturally. But where did micro processing units first originate, and what can we expect from them in the future?

A Brief History of Micro Processing Units

The very first commercially available micro processing unit was the Intel 4004, released by Intel Corporation way back in 1971. The 4004 was not very powerful, however, and not very fast – all it could do was add and subtract, and only 4 bits at a time. Even so, it delivered the same computing power as the first electronic computer built in 1946 – which filled an entire room – so it was still impressive (revolutionary, in fact) that everything was on one tiny chip. Engineers could purchase the Intel 4004 and then customize it with software to perform various simple functions in a wide variety of electronic devices.

(The Intel 4004.)

The following year, Intel released the 8008, soon followed by the Intel 8080 in 1974 – both 8-bit microprocessors. The 8080 was commercially popular and could represent signed numbers ranging from -128 to +127 – an improvement over the 4004's -8 to +7 range, though still not particularly powerful, so the 8080 was used mainly for control applications. Other micro processing units, such as the 6800 from Motorola and the Z80 from Zilog, were also popular at this time.
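The signed ranges quoted here follow directly from two's-complement representation: an n-bit word covers -2^(n-1) through 2^(n-1)-1. A quick check in Python:

```python
def signed_range(bits):
    """Value range of an n-bit two's-complement word."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(signed_range(4))   # (-8, 7): the 4-bit Intel 4004
print(signed_range(8))   # (-128, 127): the 8-bit 8008/8080
print(signed_range(16))  # (-32768, 32767): third-generation 16-bit parts
```

Each extra bit doubles the number of representable values, which is why every jump in word size was such a leap in capability.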

The third generation of 16-bit micro processing units arrived between 1979 and 1980, and included the 8088, 80186 and 80286 from Intel, and the Motorola 68000 and 68010. These microprocessors were four times faster than their second-generation predecessors.

(Table of various microprocessors Intel has introduced over the years.)

The fourth generation of 32-bit microprocessors was developed between 1981 and 1995. With their 32-bit word size, these processors became very popular indeed as the CPU in computers. In 1993, following a court ruling two years earlier which prevented Intel from trademarking "386" as the name of its then most powerful processor, the company released the 80586 under the name Intel Pentium, opening a new era in consumer microprocessor marketing. No longer were processors referred to solely by numbers; instead they carried a brand name, and the trademarked "Pentium" soon became something of a status symbol amongst computer owners.

The fifth generation arrived in 1995 with high-performance and high-speed 64-bit processors. As well as new versions of Pentium, over the years these have included Celeron, Dual, and Quad-core processors from Intel, and many more from other developers including Motorola, IBM, Hewlett Packard and AMD. See Computer Hope’s “Computer Processor History” for an extended list of computer processors over the years, or the Wikipedia entry “Microprocessor Chronology”.

The Future of Microprocessors

As time and technology advance, microprocessors get increasingly powerful. Today, nearly all processors are multi-core, which improves performance while reducing power consumption. A multi-core processor behaves like two or more single microprocessors, but because it occupies only one socket within the system, the connection between the processor and the rest of the computer is much faster. Intel remains the strongest competitor in the microprocessor market today, followed by AMD.

Micro processing units have also gotten smaller and smaller over the years. In the 1960s, computer scientist and Intel co-founder Gordon Moore made an interesting observation: every twelve months, engineers were able to double the number of transistors on a square-inch piece of silicon. This held true for about ten years; then, in 1975, Moore revised his forecast for the next decade to a doubling every 24 months – which indeed proved to be more or less accurate until around 2012.
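The compounding effect of Moore's revised forecast is easy to see with a short calculation. The sketch below assumes a doubling every 24 months starting from the Intel 4004's roughly 2,300 transistors; the figures are illustrative projections, not actual chip counts:

```python
def transistors(start_count, years, doubling_period_years=2):
    """Project transistor count under Moore's revised (1975) forecast."""
    return start_count * 2 ** (years / doubling_period_years)

# Projecting forward from the 4004's ~2,300 transistors in 1971:
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, round(transistors(2300, year - 1971)))
```

Twenty doublings turn a few thousand transistors into a few billion, which is roughly where mainstream processors stood by the early 2010s.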


However, we’re now starting to reach the physical limits for how small transistors can get. Until recently, the industry standard was 14 nanometres (1nm = one-billionth of a metre). Then came Apple’s A12 Bionic processor – which powers the iPhone XR, XS, and XS Max – measuring in at 7nm. Since then, IBM has been experimenting with 5nm chips, and researchers at MIT and the University of Colorado have developed transistors that measure a record-setting 2.5nm wide.

However, Moore’s Law cannot continue ad infinitum for a number of reasons. For starters, we must consider the threshold voltage – i.e. the voltage at which the transistor allows current to pass through. The problem lies in the imperfection of a transistor as a switch: it leaks a small amount of current when turned off, and the situation worsens as the density of transistors on a chip increases. This takes a heavy toll on transistor performance, which can only be maintained by increasing the threshold voltage. As such, though transistor density may be increased, it would provide comparatively little improvement in speed and energy consumption, meaning the operations performed on new chips would take more or less the same time as on the chips used today – unless a better architecture is implemented to solve the problem.

Due to such limitations placed on microprocessors today, researchers are exploring new solutions and new materials in place of silicon – such as gallium oxide, hafnium diselenide, and graphene – to keep building the performance of microprocessors.

Single-Board Microcomputers

As central processing units shrink, so too do computers themselves. Over the last 60 or so years, the computer has evolved from a machine that filled an entire room to a device that can fit neatly in your pocket. And just as electronics have shrunk, so too has the price.

Today, for just a handful of dollars consumers can purchase single-board microcomputers – about the size of a credit card – with some pretty impressive communicative options, multimedia capabilities and processing power. One of the machines at the vanguard of this low-cost, high-power, small-size computing revolution is the Raspberry Pi, launched by the Raspberry Pi Foundation in 2012 as a $35 board to promote teaching of basic computer science in schools and developing countries. 

(Raspberry Pi family photo.)

The original Raspberry Pi – the Pi 1 Model B – had a single-core 700MHz CPU with 256MB RAM. There have been several iterations and variations since that initial release, however, with the latest model – the Pi 4 Model B – boasting a quad-core 1.5GHz CPU with up to 4GB RAM. All models can be transformed into fully-working computers with just a modest bit of tinkering plus your own keyboard, mouse and monitor – users have even had success using a Raspberry Pi as a desktop PC for regular office work, including web browsing, word processing, spreadsheets, emailing, and photo editing. All for under $55.


Of course, Raspberry Pi has competitors – most notably Arduino, a company that produces single-board microcontrollers (using a variety of microprocessors) that can be used to design and build devices that interact with the real world. Both Raspberry Pi and Arduino devices are widely used in development and prototyping projects, particularly for IoT devices. They are, however, much in the domain of the hobbyist – craftsmen and women trying their hand at creating useful everyday tools such as remote controls for garage doors and thermometer devices, as well as more fun projects like gaming consoles, robots and drones. There are also various other proprietary development boards available from companies such as ST Microelectronics and Texas Instruments.

While all of these development kits are good for prototyping, they are less suitable for mass production. Here at Vinnter, we use both Raspberry Pi and Arduino devices in development projects where embedded systems need to be adopted. The challenge, however, comes when we move from the prototype to an industrialized product development project for two main reasons – cost and size.

Though Raspberry Pi and Arduino units are relatively cheap for the consumer market, up to $35 a throw is simply not viable when planning for 10,000 or even 100,000 units to be produced for inclusion in other products. The other problem that comes from a development kit like Arduino or Raspberry Pi is the large size. True, these devices are impressively small for what they are – but because they include additional functions and features (such as USB connectors, ethernet connectors, HDMI connectors, etc.) that most likely won’t be required for the product being developed, they are simply too big and impractical for most real-world applications. For example, if you were developing a smart watch, then a credit-card-sized device isn’t practical at all. In addition, unnecessary functions increase power consumption – which is an important consideration, especially for battery-powered products.

Final Thoughts

Microprocessors have come a long way since the humble Intel 4004. Now capable of controlling everything from small devices such as watches and mobile phones to large computers and even satellites, it’s safe to say that with their low-cost availability, small size, low power consumption, high versatility and high reliability, the microprocessor is one of the most important and significant inventions responsible for powering our modern world.

When developing microprocessor-based commercial products or new technology products for businesses, however, it is essential to design a custom microprocessor board that’s fit for purpose. Though many very capable individuals at organizations can have much success prototyping new products using single-board microcomputer devices such as the Raspberry Pi, when it comes to large-scale production, the project will eventually have to be migrated to a production design. As such, companies currently prototyping new products will find working with third parties that have the resources and expertise to take their microprocessor-based developments and ideas to the next stage invaluable.

Vinnter serves as an enabler for developing new business and service strategies for traditional industries, as well as fresh start-ups. We help companies stay competitive through embedded software development, communications and connectivity, hardware design, cloud services and secure IoT platforms. Our skilled and experienced teams of developers, engineers and business consultants will help you redefine your organization for the digital age, creating new, highly-secure connected products and digital services that meet the evolving demands of your customers. Get in touch to find out more.

The Value of Robotic Automation Lies in Reskilling the Workforce

The first robot entered the workplace in 1959, at an automotive die casting plant in Trenton, New Jersey. Since then, many other industries such as electrical/electronics, rubber and plastics, pharma, cosmetics, food and beverage, and the metal and machinery industry have accelerated their adoption of robotic automation. By 2017, there were over two million operational robots across the world, with the number projected to almost double to 3.8 million units by 2021. Today, global robot density (number of robot units per 10,000 employees) in manufacturing industries stands at 74, up from 66 units in 2015.


The Economic Impact of Robots

A June 2019 How Robots Change The World report from Oxford Economics estimates that a 1% increase in stock of robots could boost output per worker by 0.1% across the entire workforce. The study also projects a 5.3% increase in global GDP, equivalent to about $5 trillion, if robot installations were increased 30% above the baseline forecasts for 2030. 

There have been several studies over the years that have established the superlative impact of robotic automation on productivity, competitiveness and economic growth. There are also studied arguments about how robotic automation enables businesses to reshore jobs, increases demand for higher-skilled workers, addresses rising labor scarcity, and creates new job opportunities that do not even exist today.  

The Social Impact of Robots

But all that opportunity is not without its challenges. For instance, the Oxford Economics study found that, on average, each newly installed robot displaces 1.6 manufacturing workers. This means that up to 20 million manufacturing jobs could be at risk of displacement by 2030. 

It is also necessary to acknowledge that robots are no longer a purely manufacturing phenomenon. Though the automotive sector pioneered and continues to pursue the deployment of robots, today, many other manufacturing sectors, including electrical/electronics, rubber and plastics, pharmaceutical and cosmetics, food and beverage, metal and machinery etc., are investing heavily in robotic automation. And the same is true outside of manufacturing, with retail and ecommerce, sales and marketing, customer service, IT and cybersecurity, and many more sectors and segments besides all deploying robotic automation and artificial intelligence (AI) software to enhance business intelligence and customer experiences. The market for professional services robots is also expected to grow at an average rate of 20-25% between 2018 and 2020. The entire field of robotics is advancing much faster today thanks to falling sensor prices, open source development, rapid prototyping and digital era constructs such as Robotics-as-a-Service and AI. 

The Long and the Short of It Is…


… The robots are coming in practically every industry. But should we really all be fearful for our jobs? That is certainly the widely held view. According to McKinsey, there’s widespread belief around the world that robots and computers will do much of the work currently done by humans within the next 50 years.

Research from the World Economic Forum (WEF) agrees that millions of jobs are likely to be displaced by automation, but we have less to fear from robots than many seem to think – at least in the short term. Though the Swiss think tank predicts that robots will displace 75 million jobs globally by 2022, 133 million new ones will be created – a net positive. 

The report notes four specific technological advances as the key drivers of change over the coming years. These are: ubiquitous high-speed mobile internet, artificial intelligence, widespread adoption of big data analytics, and cloud technology. “By 2022, according to the stated investment intentions of companies surveyed for this report, 85% of respondents are likely or very likely to have expanded their adoption of user and entity big data analytics,” write the report’s authors. “Similarly, large proportions of companies are likely or very likely to have expanded their adoption of technologies such as the Internet of Things and app- and web-enabled markets, and to make extensive use of cloud computing. Machine learning and augmented and virtual reality are poised to likewise receive considerable business investment.”

The Reskilling Revolution 

WEF finds that nearly 50% of companies expect that automation will lead to some reduction in their full-time workforce by 2022, based on the job profiles of their employee base today. However, nearly a quarter expect automation to lead to the creation of new roles in the enterprise, and 38% of the businesses surveyed expect to extend their workforce to new productivity-enhancing roles. And this, indeed, is key to the robotic revolution and why so many companies are committed to investing in new technologies in the first place – because robotics, automation, machine learning, cloud computing, and big data analytics can enhance the productivity of the current workforce in the new digital economy and improve business performance.

In industries like manufacturing, these technologies provide seamless connections across production and distribution chains, streamlining the process of getting products from the assembly line into the hands of the customer. But it’s not just manufacturing – everything from healthcare to retail will benefit from these emerging and maturing technologies. And it’s not necessarily the case that robots and algorithms will replace the current workforce, either – rather, WEF says, they will “vastly improve” the productivity of existing jobs and lead to many new ones in the coming years. 

In the near future, it is expected that workers will be doing less physical work (as more and more of it is handled by robots), but also less information collecting and data processing (as these, too, will be automated), freeing up workers for new tasks. Though there will be more automatic real-time data feeds and data monitoring that won’t require workers to enter and analyze data, there will also be more work at the other end of the spectrum, where real humans spend time making decisions based on the data collected, managing others, and applying expertise. Indeed, automation is more likely to augment the human workforce than replace it.

The ability to digitize information and data is stimulating complete redesigns of end-to-end processes, customer experience strategies, and creating more efficient operations. Data analytics, indeed, is a key part of realizing the potential of all next generation technology – including robotics and automation – to enable better real-time reaction to trends and what customers want. 

Though there will inevitably be a decline in some roles as certain tasks within them become automated or redundant, in their place emerges a demand for new roles – though this does mean that the existing workforce will need to be retrained to update their skills. WEF says that among the range of roles that are set to experience increasing demand are software and applications developers, data analysts and scientists, and ecommerce and social media specialists – roles, the authors say, that are significantly based on and enhanced by the use of technology. Also expected to grow are roles that leverage distinctively “human” skills – those in customer service, sales and marketing, training and development, people and culture, organizational development, and innovation management. There will also be accelerating demand for wholly new specialist roles related to understanding and leveraging the latest emerging technologies – AI and machine learning specialists, big data specialists, process automation experts, security analysts, user experience and human-machine interaction designers, and robotics engineers.   

In short, the robotics revolution will spur a reskilling revolution – and businesses already seem to be on board with this idea. 66% of respondents in a McKinsey study assigned top-ten priority to addressing automation/digitization-related skill gaps. 


Final Thoughts 

As we head towards 2020, robotics, automation and related technologies are becoming a prerequisite for any company that wishes to remain competitive. Businesses large and small are embracing automation technologies – from fully-fledged assembly line robots to customized call center chatbots – to help simplify business processes, improve productivity and deliver better customer experiences at scale. This trend is only going to accelerate in the future, and though the rise of the robots may be a cause of concern for many in the labor market, the reality is that organizations won’t be able to solve all of their problems with automation alone. Rather, the robots are coming to augment the human workforce, not replace it. Job roles may change, and new skills may be required, but unless companies of all sizes start automating their processes, they will soon find themselves gobbled up by those that do.  

Vinnter serves as an enabler for developing new business and service strategies for traditional industries, as well as fresh start-ups. We help companies stay competitive through embedded software development, communications and connectivity, hardware design, cloud services and IoT platforms. Our skilled and experienced teams of developers, engineers and business consultants will help you redefine your organization for the digital age, and create new connected products and digital services that meet the evolving demands of your customers. Get in touch to find out more. 

Using design thinking when developing IoT solutions

A McKinsey analysis of over 150 use cases estimated that IoT could have an annual economic impact in the $3.9 trillion to $11.1 trillion range by 2025. At the top end, that’s a value equivalent to 11% of the global economy. We believe that using design thinking when developing IoT solutions will help the industry realize this potential.


It is not that difficult to envisage that degree of value-add given the continually evolving technical and strategic potential of IoT; there are new standards, components, platforms, protocols, etc. emerging almost daily. It is now possible to combine these options in seemingly endless ways, to address different use case requirements of connectivity, bandwidth, power consumption, user interaction, etc. to suit almost every potential application and user need out there. 

There will, however, be several technical, regulatory and human resources challenges that will have to be addressed before we can extract the real value of IoT. But perhaps the biggest challenge will lie in the approach that IoT companies take to identifying user needs and developing solutions that represent real value. 

Every technology cycle, from the dot com boom to the current AI gold rush, produces its own set of quirky, weird and downright pointless applications. And IoT is no different, with many products boasting connectivity features that may qualify them for a “smart” tag but offer no real benefits whatsoever. Every premier industry event like CES is followed by a slew of news roundups describing bewilderingly absurd “smart” solutions, from smart dental floss to $8,000 voice-activated toilets. 

But we believe that IoT’s true potential and value will only emerge when the focus is squarely on leveraging the power of IoT to address what are known as wicked problems. 

The concept of wicked problems was first defined by design theorist Horst Rittel in the context of social planning in the mid 1960s. It refers to complex issues, characterized by multiple interdependent variables and disparate perspectives, which seem impossible to solve. These problems do not necessarily lend themselves to traditional linear problem-solving processes and methodologies and require a new approach that can handle the inherent ambiguity and complexity of these issues. It was design theorist and academic Richard Buchanan who, in 1992, referenced design thinking as the innovation required to tackle wicked problems. 

Notwithstanding smart litter boxes that can text and smart garbage cans that automate shopping lists, the focal point of IoT has to be on identifying and addressing intractable problems and design thinking is the approach that will enable the IoT industry to do just that.  

Design thinking – A brief history

For many in the industry, design thinking is almost inextricably linked to Tim Brown and IDEO, both of whom played an important role in mainstreaming the term and the practice. But as IDEO helpfully clarifies on its website, though the company is often credited with inventing the term, design thinking has roots in a global conversation that has been unfolding for decades.

To understand how that conversation unfolded, we turn to Nigel Cross, Emeritus Professor of Design Studies at The Open University, UK, and his 2001 paper Designerly Ways Of Knowing: Design Discipline Versus Design Science. The paper traces the roots of what would eventually evolve into design thinking to the 1920s, and the first modern design movement. According to Cross, the aspiration was to “scientise” design and produce works of art and design that adhered to key scientific values such as objectivity and rationality. 

These aspirations surfaced again, in the 1960s, but the focus had evolved considerably. If formerly the emphasis was on scientific design products, the design methods movement of the 60s focused on the scientific design process and design methodology emerged as a valid subject of inquiry. The decade was capped by cognitive scientist and Nobel Prize laureate Herbert Simon’s 1969 book, Sciences of the Artificial, which refers to techniques such as rapid prototyping and testing through observation that are part of the design thinking process today.  

This “design science decade” laid the groundwork for experts from various fields to examine their own design processes and contribute ideas that would move the aspiration to scientise design along.  IDEO came along in the early 90s with a design process, modeled on the work developed at the Stanford Design School, that even non-designers could wrap their head around, thus providing the impetus to take design thinking mainstream. By 2005, Stanford had launched its own course on design thinking. Today, there are several leading educational institutions offering design thinking courses and a whole range of non-design businesses that rely on design thinking to resolve some of their wickedest problems. 

So, what is design thinking?  

Let’s start with a slice of history again. 

In the 80s, Bryan Lawson, professor at the School of Architecture of the University of Sheffield, United Kingdom, conducted an empirical study to understand how the approach to problem-solving varies between scientists and designers. The study revealed that scientists used problem-focused strategies as opposed to designers who employed solution-focused strategies. Scientists solve by analysis whereas designers solve by synthesis. 

A problem-focused approach relies on identifying and defining all parameters of a problem in order to create a solution. Solution-focused thinking, on the other hand, starts with a goal, say an improved future result, rather than focusing only on resolving the problem. 

Design thinking is a solution-focused methodology that enables the creative resolution of problems and creation of solutions, with the intent of an improved future result. It’s an approach that values analysis as well as synthesis. It is an integrated cognitive approach that combines divergent thinking, the art of creating choices, with convergent thinking, the science of making choices. Design thinking provides non-designers with elements from the designer’s toolkit that allows them to take a solution-focused approach to problem-solving. 


IDEO’s definition of design thinking as a human-centred approach also includes what is often referred to as the three lenses of innovation: desirability, feasibility and viability. Human-centred design always begins by establishing desirability – defining what people want. The next stage is to establish whether it is technically feasible to deliver what people want. And finally, even a desired and technically feasible solution must be commercially viable for a business. Design thinking, then, is a process that delivers innovative solutions optimally positioned at the overlap between desirability, feasibility and viability.

This framework should be the ideal starting point for product development in the IoT industry. Today, a lot of solutions seem to take desirability and viability for granted just because it is technically feasible to embed almost anything with connectivity. But is this the right approach to IoT innovation?  

The 5-stage design thinking model 

The design thinking process guide from the Hasso-Plattner Institute of Design at Stanford prescribes a 5-stage model that progresses as follows:


EMPATHIZE: Empathy is a critical component of the human-centred design process as it rarely if ever begins with preconceived ideas, assumptions and hypotheses. This stage allows enterprise teams to better understand the people that they are designing for; understand their needs, values, belief systems and their lived experience. As the process guide puts it, the best solutions come out of the best insights into human behavior. Design thinking encourages practitioners to observe how people interact with their environment in the context of the design challenge at hand. Designers should also directly engage with end users, not in the form of a structured interview but as a loosely bounded conversation. Both these approaches can throw up insights that may not necessarily be captured by historical data or expert opinions. 

DEFINE: This stage is more about defining the design challenge from the perspective of collected end-user insights than about defining a solution. The “define” stage enables the synthesis of the vast amounts of data collected in the previous stage into insights that can help focus the design challenge. At the end of this stage, it must be possible to articulate an actionable problem statement that will inform the rest of the process.

IDEATE: The purpose of ideation is not to home in on the right idea but to generate the broadest range of possible ideas that are relevant to the design challenge. Finding the right idea will happen in the user testing and feedback stage. In the meantime, use as many ideation techniques as possible to move beyond the obvious into the potentially innovative. Most important of all, defer judgement, as evaluating ideas as they flow can curb imagination, creativity and intuition. At the end of the ideation process, define quality voting criteria to move multiple ideas into the prototyping stage. 

PROTOTYPE: Build low-resolution (cheap and quick) prototypes as it means that more prospective ideas can be tested. Use these prototypes to elicit feedback from users and the team that can then be looped back into refining these solutions across multiple iterations. A productive prototype is one that communicates the concept of the proposed solution, stimulates conversation and allows for the quick and cheap failure of unworkable ideas. 

TEST: Prototyping and testing often work as two halves of the same phase rather than as two distinct phases. In fact, the prototype design will have to reflect the key elements that must be tested and even how they will have to be tested. Testing need not necessarily focus only on users’ feedback to the presented prototype. In fact, this stage can sometimes generate new insights as people interact with the prototype. Rather than telling users how to use the prototype, allow them to interact freely and compare different prototypes. 

And finally, there is iteration. This is not so much a stage as a golden rule of design thinking. The point of design thinking is to create a repeated learning loop that allows teams to refine and refocus ideas or even change direction entirely. 

Of course, the Stanford model is not the only design thinking framework in circulation today. Those interested in more options can find an introductory compilation at 10 Models for Design Thinking. Though these frameworks may vary in nomenclature and process structure, some central design thinking concepts such as empathy and iteration remain common to most.

Is design thinking effective? 

According to one source, only 24% of design thinking users measure the impact of their programs. Even a survey from Stanford found that organizations struggled to determine ROI.  

However, in an excellent article in the Harvard Business Review, Jeanne Liedtka, professor of business administration at the University of Virginia’s Darden School of Business, concludes, after a seven-year 50-project cross-sectoral qualitative study, that “design thinking has the potential to do for innovation exactly what TQM did for manufacturing: unleash people’s full creative energies, win their commitment and radically improve processes.”

A more quantitative study by Forrester on The Total Economic Impact Of IBM’s Design Thinking Practice provides a litany of quantified benefits that includes the realization of $20.6 million in total value due to a design thinking-led reduction in design, development and maintenance costs.  

But the limited availability of quantitative data has been offset by the steady stream of success stories of world-leading companies transforming elements of their business with design thinking. 

Design thinking offers the framework that, at a fundamental level, will enable the IoT industry to reorient itself away from a “what can I connect next to the internet” mindset to a “where do users need help the most” approach. Its human-centric empathy-driven approach enables businesses to identify and understand potential contexts and problems from the perspective of the end-user rather than from the point of view of the possibilities afforded by technology. Companies can now use the three lenses of innovation to evaluate the practical, technical and commercial value of the solutions that they plan to deploy. And finally, the inclusive and iterative design process will ensure a much higher probability of success while enabling real value for customers. 

Access Control & IoT Security: Challenges And Opportunities

IoT, the new attack vector

IoT attacks increased by over 217% in 2018. But a report with the provocative title of IoT Cyberattacks Are The Norm, The Security Mindset Isn’t found that only 7% of organizations consider themselves equipped to tackle IoT security challenges. If that sounds wanting, consider this: 82% of organizations that develop IoT devices are concerned that the devices are not adequately secured from a cyberattack. Another study found that only 43% of enterprise IoT implementations prioritize security during the development/deployment process and only 38% involve security decision-makers in the process. Access control is considered the first line of defence when it comes to IoT security.

Now, those broad trend indicators can possibly apply to any nascent technology. But there are two factors that make the IoT scenario particularly precarious. The first is the fact that, by all indications, the IoT is emerging as a potentially preferred attack vector for launching botnet assaults or even infiltrating enterprise networks. The second is that thus far, the IoT industry, from device developers to enterprise IT organizations, seems oblivious or ill-equipped to even secure access control and authentication, one of the fundamental components of any technology security strategy. 

Key IoT security challenges

However, an objective analysis of the scenario cannot but mention some of the unique characteristics of IoT networks that make security much more of a challenge than with other technology environments.  

First off, there’s the attack surface. An estimated 20 billion devices will be connected to the IoT by 2020; that’s 20 billion potential endpoint targets for malicious intent. A lot of these devices will be deployed in areas where it may be impossible or impractical to provide physical security, which makes it easier for bad actors to physically compromise devices on the network. Apart from the physical device, each IoT system comprises multiple edges and tiers including mobile applications, cloud and network interfaces, backend APIs, etc. Each one of these elements represents a potential vulnerability, and just one unsecured component can be leveraged to compromise the entire network.  

Second, there’s the sheer heterogeneity of IoT networks, with a range of different hardware and software stacks, governed by different access-control frameworks and with varying levels of privileged access. This means that there is no one-size-fits-all approach to security, and IoT security strategy will have to be designed around the characteristics of participating entities on each network. 

And finally, most IoT devices have limited power, storage, bandwidth and computational capabilities. So conventional security methods that are effective in other computing systems will be too complex to run on these constrained IoT devices. 

Device visibility precedes access control 

It is this distributed nature of IoT, where large volumes of devices communicate autonomously across multiple standards and protocols, that makes security more complex than it is in other more monolithic computing environments. That’s also why the IoT industry will need to reimagine conventional access control and authentication models and protocols and repurpose them for this new paradigm. The right access control and authentication frameworks enable companies to identify IoT devices, isolate compromised nodes, ensure the integrity of data, authenticate users and authorize different levels of data access. 

Since access control is the first point of contact between a device and the IoT network, these technologies must be able to recognize devices in order to determine the next course of action. IoT devices have to be visible before access control and authentication can kick in and do their job. But most enterprises currently do not fare very well on the IoT device visibility score; a mere 5% keep an inventory of all managed IoT devices and only 8% have the capability to scan for IoT devices in real-time. But 46% are making it a priority in 2019 to enhance IoT discovery, isolation and access control, and that provides the starting point for a discussion on the merits of the different access control models available today. 

There are several types of access control models that can be considered for different IoT scenarios; from the basic ACL (Access Control List) model to the slightly more advanced MAC (Mandatory Access Control) model used primarily in military applications to the still-evolving and sophisticated Trust Attribute-Based Access Control model that builds on the ABAC (Attribute-Based Access Control) model to address requirements specific to IoT. 

Types of access control and authentication models 

But for the purposes of this article, we shall focus on more mainstream models: RBAC (Role-Based Access Control), ABAC, CapBAC (Capability-Based Access Control) and the UCON (Usage Control) model. 

RBAC: As the name suggests, this model manages resource access based on a hierarchy of permissions and rights assigned to specific roles. It allows multiple users to be grouped into roles that need access to the same resources. This approach can be useful in terms of limiting the number of access policies but may not be suitable for complex and dynamic IoT scenarios. However, it is possible to extend RBAC to address the fine-grained access control requirements of IoT, though this could result in “role explosion” and create an administrative nightmare. 
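The role-to-permission indirection described above can be pictured in a few lines of Python; this is only an illustrative sketch, and the users, roles and permission names are invented:

```python
# Minimal RBAC sketch: users map to roles, roles map to permissions.
# All names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "sensor_admin": {"read_telemetry", "update_firmware", "reboot_device"},
    "operator": {"read_telemetry", "reboot_device"},
    "viewer": {"read_telemetry"},
}

USER_ROLES = {
    "alice": {"sensor_admin"},
    "bob": {"viewer"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Note that access decisions never reference the user directly, only their roles; that indirection is what keeps the policy count down, and also what forces ever more specialized roles (“role explosion”) as requirements become fine-grained.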

The OrBAC (Organizational-Based Access Control) model was created to address issues related to RBAC and to make it more flexible. This model introduced new abstraction levels and the capability to include different contextual data such as historic, spatial and temporal data. There has also been a more recent evolution along this same trajectory with Smart OrBAC, a model designed for IoT environments that offers context-aware access control. 

ABAC: In this model, the emphasis shifts from roles to attributes on the consideration that access control may not always have to be determined by just identity and roles. Access requests in ABAC are evaluated against a range of attributes that define the user, the resource, the action, the context and the environment. This approach affords more dynamic access control capabilities as user access and the actions they can perform can change in real-time based on changes in the contextual attributes.  

ABAC provides more fine-grained and contextual access control that is more suited for IoT environments than the previous RBAC. It enables administrators to choose the best combination of a range of variables to build a robust and comprehensive set of access rules and policies. In fact, they can apply access control policy even without any prior knowledge of specific subjects by using data points that are more effective at indicating identity. The biggest challenge in this model could be to define a set of attributes that is acceptable across the board. 
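As a rough sketch of the idea, an ABAC decision point evaluates predicates over subject, resource, action and environment attributes rather than identities; all attribute and policy names below are hypothetical:

```python
# ABAC sketch: a policy is a predicate over subject, resource,
# action and environment attributes. Names are illustrative only.

def thermostat_policy(subject, resource, action, environment):
    """Allow residents to adjust thermostats in their own home,
    but only while the home is in 'occupied' mode."""
    return (
        resource["type"] == "thermostat"
        and action == "set_temperature"
        and subject["home_id"] == resource["home_id"]
        and environment["home_mode"] == "occupied"
    )

def evaluate(policies, subject, resource, action, environment):
    # Deny by default; any single matching policy grants access.
    return any(p(subject, resource, action, environment) for p in policies)
```

Because the environment attributes are evaluated per request, the same subject can gain or lose access in real time as context changes, which is exactly the dynamic behaviour the model is valued for.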

CapBAC: Both RBAC and ABAC are models that use a centralized approach for access control, as in all authentication requests are processed by a central authority. Though these models have been applied in IoT-specific scenarios, achieving end-to-end security using a centralized architecture on a distributed system such as the IoT can be quite challenging. 

The CapBAC model is based on a distributed approach where “things” are able to make authorization decisions without having to defer to a centralized authority. This approach accounts for the unique characteristics of the IoT such as large volume of devices and limited device-level resources. Local environmental conditions are also a key consideration driving authorization decisions in this model, thus enabling context-aware access control that is critical to IoT. 

The capability, in this case, refers to a communicable, unforgeable token of authority that uniquely references an object as well as an associated set of access rights or privileges. Any process with the right key is granted the capability to interact with the referenced object as per the defined access rights. The biggest advantage of this model is that distributed devices do not have to manage complex sets of policies or carry out elaborate authentication protocols which makes it ideal for resource constrained IoT devices.
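One way to picture such an unforgeable token is as an HMAC-protected claim set that a constrained device can verify locally. This is a conceptual sketch only, not any specific CapBAC implementation; the key handling and token layout are simplifying assumptions:

```python
# CapBAC sketch: a capability is a token naming an object and a set of
# rights, made unforgeable with an HMAC over its payload.
import hmac, hashlib, json

ISSUER_KEY = b"issuer-secret"  # shared with devices out of band (assumed)

def issue_capability(object_id, rights):
    payload = json.dumps({"obj": object_id, "rights": sorted(rights)})
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def check_capability(token, object_id, right):
    """A constrained device needs only the key and this check --
    no central policy store is consulted at decision time."""
    expected = hmac.new(ISSUER_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    claims = json.loads(token["payload"])
    return claims["obj"] == object_id and right in claims["rights"]
```

The decision logic collapses to a single MAC verification plus a set lookup, which is what makes the distributed model attractive for devices with little memory or compute.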

UCON: This is an evolution of the traditional RBAC and ABAC models that introduces more flexibility in handling authorizations. In the traditional models, subject and object attributes can be changed either before the authorization request begins or after it is completed, but not while the subject holds permission to interact with an object. 

The UCON model introduces the concept of mutable attributes as well as two new decision factors, namely obligations and conditions, to go with authorizations. Mutable attributes are subject, object or contextual features that change their value as a consequence of usage of an object. By enabling continuous policy evaluation even when access is ongoing, UCON makes it possible to intervene as soon as a change in attribute value renders the execution right invalid.
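A toy sketch of that continuous evaluation over a mutable attribute follows; the quota figure and attribute names are invented for illustration:

```python
# UCON sketch: a mutable attribute (bytes transferred) changes while
# access is ongoing, and the policy is re-evaluated on every use.

QUOTA = 100  # maximum bytes a session may transfer (illustrative)

class Session:
    def __init__(self):
        self.granted = True
        self.bytes_used = 0  # mutable attribute, updated on use

    def policy(self):
        # The condition is re-checked continuously, not just at grant time.
        return self.bytes_used <= QUOTA

    def transfer(self, n):
        if not self.granted:
            return False
        self.bytes_used += n            # attribute mutates as a
        if not self.policy():           # consequence of usage...
            self.granted = False        # ...and access is revoked
            return False                # mid-session when violated.
        return True
```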


Apart from these mainstream models, there are also several others, such as Extensible Access Control Markup Language (XACML), OAuth, and User-Managed Access (UMA), that are being studied for their applicability to IoT environments. But it is fair to say that the pace of development of IoT-specific access control models lags seriously behind development efforts in other areas such as connectivity options, standards and protocols. 

The other worrying aspect of the situation is that enterprise efforts to address IoT security concerns do not show the same urgency as those driving IoT deployments. All this even after a large-scale malware attack in 2016 hijacked over 600,000 IoT devices using just around 60 default device credentials. A robust access control and authentication solution should help thwart an attack of that intensity. But then again, access control is just one component, albeit a critical one, of an integrated IoT security strategy. The emphasis has to be on security by design, through hardware, software and application development, rather than as an afterthought. And that has to happen immediately considering that the biggest IoT vulnerability according to the most recent top 10 list from the Open Web Application Security Project is Weak, Guessable, Or Hardcoded Passwords.  

From Smart To Helpful – The Next Generation Connected Home

“No one asked for smartness, for the smart home.” That’s the head of Google’s smart home products explaining the company’s decision to focus on delivering a helpful home that provides actual benefits rather than a smart home that showcases technology. This is key; the next generation connected home must provide convenience and actual benefit.


Smart home, helpful home, what’s in a name when the industry is growing at a CAGR of almost 15% and is expected to more than double in value, from USD 24.10 billion in 2016 to USD 53.45 billion in 2022. Growing acceptance of connected home devices powered global shipments to over 168 million in the first quarter of 2019, up 37.3% from the previous year. IDC estimates that shipments will continue to grow at almost a 15% CAGR, from 840.7 million units at the end of 2019 to 1.46 billion units by 2023. 


There are a lot of factors fueling the increasing acceptance of connected home devices. A majority of consumers expect their next home to be connected and are willing to pay more for a helpful home. Though this trend may be spearheaded by digital-savvy millennials and their penchant for tech innovations, the convenience afforded by these smart solutions is drawing in the older generations as well. Fairly recent innovations like voice-enabled interfaces are simplifying the adoption process for a larger proportion of consumers. At the same time, increasing competition and falling device prices, rising interest in green homes and sustainable living, have all, to varying degrees, helped convert consumer interest into action. 

But of course, there has to be underlying value to all these trends and preferences. 

Key value drivers in smart home systems

There are broadly three layers of value in a smart home system. The first is the convenience of anytime-anywhere accessibility and control, where consumers can change the state of their devices, such as locking a door or turning off the lights, even remotely, through a simple voice or app interface. 

The second layer enables consumers to monitor and manage the performance of these systems based on the data they generate. For instance, consumers can manage their energy consumption based on the smart meter data or create a fine-grained zone-based temperature control using smart thermostats to control costs. 

The final layer is automation, which is the logic layer that enables consumers to fine tune and automate the entire system based on their individual needs and preferences. 

To date, there have been some empirical quantifications of value in terms of how a lot of smart homeowners in the US save 30 minutes a day and $1,180 every year, or how smart thermostats can cut temperature control costs by 20%. However, it is possible, at least theoretically, to link adoption to value, as smart home segments such as energy and security management, with tangible value propositions of cost savings and safety, have traditionally experienced higher rates of adoption. 

But as the smart home market evolves beyond the hype and adoption cycle, the dynamics of value are changing. And Google’s pivot from smart to helpful reflects this shift in the connected home market. It is no longer about the technology but about the value it can deliver.    

The future value of smart home technologies

Customers get smart home tech. In the US, a key market for this emerging technology, most people already use at least one smart home device. According to one report, US broadband households now own more than 10 connected devices, with purchase intention only getting stronger through the years. The global average for smart home devices per household is forecast to be 16.53, up from the current 5.35. 

Along with device density, consumer expectations of the technology are also rising. Almost 80% of consumers in a global study expect a seamless, personalized and unified experience where their house, car, phone and more all talk to each other. They expect emerging technologies like AI to enhance their connected experience. And they expect all this to be delivered without compromising privacy or security. 

There is a similar shift on the supply side of the market too. 

If the emphasis thus far was on getting products into consumers’ homes, the future will be about creating a cohesive experience across all these devices. In this future, services, rather than devices, will determine the value of an IoT vendor. With device margins fading away, the leaders will be determined by their ability to leverage the power of smart home device data to deliver services that represent real value for consumers.  

So a seamless cohesive cross-device experience is what consumers expect and is also what will drive revenue for smart home solution providers. And the first step towards realizing this future will be to address the systemic issue of interoperability in smart homes. 

Interoperability in smart home technologies

Interoperability over brand loyalty: that seems to be the consumer stance according to a report from market research and consulting firm Parks Associates. When it comes to purchasing new devices, more people prioritize interoperability with their current smart home setup over matching brands to their existing products. 


The true smart home is not a loosely connected set of point solutions. It is an integrated ecosystem of smart devices that delivers a seamless and cohesive smart home experience. 

For smart home vendors, interoperability creates the data foundation on which to build and monetize new solutions and services that add value to the consumer experience. Ninety-seven percent of respondents to a 2018 online survey of decision-makers in the smart home industry believed that shared data and communication standards would benefit their business. These benefits ranged from the ability to create new solution categories (54%), capture and correlate across richer data sets (43%), and focus on core strengths rather than grappling with integration issues (44%), to accelerated adoption (48%).     

There are two fallouts from the limited interoperability standards in the smart home market today. The first is the integration challenges it creates for consumers trying to create a cohesive ecosystem out of an extensive choice set of solutions fragmented by different standards and protocols. 

There are a few ways in which consumers can address this challenge. The rapid rise of smart speakers, the fastest-growing consumer technology in recent times, and voice-enabled interfaces has helped streamline the adoption and simplified integration to a certain degree. The next option is to invest in a dedicated smart home hub, like Insteon Hub and Samsung SmartThings Hub, that ties together and translates various protocol communications from smart home devices. Many of these hubs can now be controlled using Amazon Alexa and Google Assistant voice controls. Universal Control Apps such as IFTTT and Yonomi also enable users to link their devices and define simple rule-based actions with the caveat that they have been integrated by device manufacturers. Many device vendors have also launched “works with” programs to expand compatibility and enable consumers to create a more or less unified smart home solution. 
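The rule-based linking that universal control apps offer amounts to simple trigger-action pairs; here is a toy sketch of the idea, with invented device and event names:

```python
# Toy IFTTT-style rule engine: each rule maps a trigger event to an
# action on another device. Device and event names are invented.

class RuleEngine:
    def __init__(self):
        self.rules = []   # (trigger_event, action) pairs
        self.log = []     # record of actions fired, for inspection

    def add_rule(self, trigger_event, action):
        self.rules.append((trigger_event, action))

    def handle(self, event):
        # Fire every action whose trigger matches the incoming event.
        for trigger, action in self.rules:
            if trigger == event:
                self.log.append(action())

engine = RuleEngine()
engine.add_rule("front_door.opened", lambda: "hallway_light.on")
engine.add_rule("front_door.opened", lambda: "camera.record")
engine.handle("front_door.opened")
```

The catch noted above applies here too: a rule can only name devices and events that the platform has already integrated, which is why manufacturer participation remains the bottleneck.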

Though each of these approaches has its merits, collectively they represent an attempt to mitigate the symptoms of fragmentation rather than enforce interoperability by design. A shared standard would go a long way toward addressing these challenges and enabling organic interoperability in smart homes. 

OCF and open source, open standard interoperability

OCF (Open Connectivity Foundation) is an industry consortium dedicated to ensuring secure interoperability for consumers and IoT businesses. Its members include tech giants such as Microsoft, Cisco, Intel and appliance majors such as Samsung, LG, Electrolux and Haier.      

For businesses, OCF provides open standard specifications, code and a certification program that enable manufacturers to bring to market OCF Certified products with broad-scale interoperability across operating systems, platforms, transports and vendors. The Foundation’s 1.0 specification was ratified last year and will soon be published as an ISO/IEC standard. OCF also provides two open source implementations — IoTivity and IoTivity Lite — for manufacturers looking to adopt the ratified standard and maximize interoperability without having to develop for different standards and devices. 

OCF’s latest 2.0 specification introduces several new features, including device-to-device connectivity over the cloud, something that was not possible in 1.0. The 2.0 specification will be submitted for ISO/IEC ratification later this year. 

With key partners like Zigbee and a specification that is now recognized worldwide, OCF continues to advance the development of a truly open IoT protocol, equipping developers and manufacturers in the IoT ecosystem with the tools they need to provide a secure, interoperable end-user experience.

OCF works with key partners, such as Zigbee, Wi-Fi Alliance, World Wide Web Consortium (W3C), Thread, and Personal Connected Health Alliance (PCHAlliance), and with over 400 members from the industry to create standards that extend interoperability as an operating principle.  

Interoperability, however, is often only the second biggest concern of smart home consumers. The first is security, relating to hacked or hijacked connected home systems, and privacy, relating to how consumer data is collected, used and shared. 

Security & privacy in smart homes

In July this year, there were news reports about a massive smart home breach that exposed two billion consumer records. This was not the result of any sophisticated or coordinated attack but rather the consequence of one misconfigured Internet-facing database without a password. It was a similar situation with the Mirai attack of 2016, where consumer IoT devices such as home routers, air-quality monitors and personal surveillance cameras were hijacked to launch one of the biggest DDoS attacks ever. Then too, there was no sophistication involved. The attackers simply used 60 commonly used default device credentials to infect over 600,000 devices.  

IoT, including consumer IoT, offers some unique challenges when it comes to security. But the security mindset has yet to catch up with the immensity of the challenge. 

It’s a similar situation when it comes to privacy. Globally, most consumers find the data collection process creepy, do not trust companies to handle and protect their personal information responsibly and are significantly concerned about the way personal data is used without their permission. 

The situation may just be set to change as the first standards for consumer IoT security start to roll in. 

Earlier this year, ETSI, a European standards organization, released a globally applicable standard for consumer IoT security that defines a security baseline for internet-connected consumer products and provides a basis for future IoT certification schemes. The new standard specifies several high-level provisions that include a pointed rejection of default passwords. The ETSI specification also mandates a vulnerability disclosure policy that would allow security researchers and others to report security issues.

Security is an issue of consumer trust, not of compliance. The smart home industry has to take the lead on ensuring the security of connected homes by adopting a “secure by design” principle. 

Emerging opportunities in smart homes

As mentioned earlier, consumers really expect their smart home experience to flow through to their outdoor routines, their automobiles and their entire daily schedules. Smart home devices will be expected to take on more complex consumer workloads, like health applications for instance, and AI will play a significant role in making this happen. AI will also open up the next generation of automation possibilities for consumers and play a central role in ensuring the security of smart home networks.

Data will play a central role in delivering a unified, personalized and whole-home IoT experience for consumers. Companies with the capability to take cross-device data and convert it into insight and monetizable services will be able to open up new revenue opportunities. However, these emerging data-led opportunities will come with additional scrutiny on a company’s data privacy and security credentials. 

Evaluating The Top Three IoT Platforms Against Three Critical IoT-Specific Capabilities

The cloud market is currently dominated by three platforms – Amazon Web Services, Microsoft Azure, and Google Cloud Platform – that control nearly 65% of the global market. But as the core cloud computing market matures, new technologies such as artificial intelligence, machine learning and IoT are opening up a new front in a renewed battle for dominance. These upsell technologies could well provide the strategic differentiator that will shake up the current rankings. Evaluating and comparing the IoT capabilities of these platforms, however, can be cumbersome.

The value of the global IoT platforms market, comprising both cloud-based and on-premise software and services, is estimated to reach USD 6.11 billion by 2024. The market is currently growing at almost 29% CAGR and CSPs (Cloud Service Providers) have played a key role in lowering barriers to IoT adoption. By standardizing components that can be shared across vertical applications, CSPs are lowering costs, simplifying implementations and empowering customers to experiment with and quickly scale up new use cases. 

CSP IoT offerings are still focused on delivering broad horizontal services with little potential for industry-specific optimizations. But that will change as the market matures and the need for more nuanced and sophisticated solutions opens up. In the meanwhile, let’s find out how the top three cloud platforms fare when it comes to IoT.  

In order to make this a bit more objective, we will be looking at how these platforms perform in terms of three components that are critical for any IoT solution: 

  1. Core IoT
  2. Edge Computing
  3. Data Management & Analytics

These categories are by no means perfectly mutually exclusive, and there can be a bit of overlap, but they do provide a more like-for-like basis for comparison in terms of fundamental IoT capabilities.  

Core IoT

Amazon Web Services:

AWS IoT Core is a managed cloud service that allows for the easy and secure connection and interaction between devices and cloud applications. It supports billions of devices across multiple industry-standard and custom protocols. The service stores the latest state of every connected device, allowing applications to track, communicate and interact with devices even when they are disconnected. AWS IoT Core allows users to implement new device and application features by simply defining and updating business rules in real-time. The service supports a variety of communication protocols including HTTP, WebSockets, and MQTT.
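The stored-state behaviour described above (AWS calls these stored states device shadows) can be pictured as a pair of desired and reported state documents. The sketch below models the concept only, not the actual AWS API or SDK:

```python
# Conceptual sketch of a device shadow: applications write "desired"
# state while the device is offline; the device reconciles on reconnect.
# This illustrates the idea only -- not the real AWS IoT Core interface.

class DeviceShadow:
    def __init__(self):
        self.desired = {}    # state requested by applications
        self.reported = {}   # last state reported by the device

    def app_set(self, key, value):
        # Applications can write desired state even when the device
        # is disconnected.
        self.desired[key] = value

    def device_sync(self):
        # On reconnect, the device applies the pending desired state;
        # the delta is whatever still differs from its reported state.
        delta = {k: v for k, v in self.desired.items()
                 if self.reported.get(k) != v}
        self.reported.update(delta)
        return delta
```

This desired/reported split is what lets applications track, communicate with and command devices regardless of connectivity.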

Authentication and end-to-end encryption across connection points ensures that data is never exchanged between devices and AWS IoT Core without first establishing identity. Users can further secure access by applying policies with granular permissions.

With AWS IoT Core, users can easily connect to a range of other AWS services, like AWS Lambda, Amazon Kinesis, Amazon S3, Amazon SageMaker, Amazon DynamoDB, Amazon CloudWatch, AWS CloudTrail, Amazon QuickSight, and Amazon Elasticsearch Service, without having to manage any infrastructure. 

Microsoft Azure:

Azure IoT offers two frameworks for building IoT solutions to address different sets of customer requirements.  

Azure IoT Central is a fully managed SaaS solution that uses a model-based approach to help users without expertise in cloud-solution development build enterprise-grade IoT solutions. Then there are Azure IoT solution accelerators, a collection of enterprise-grade solution accelerators that can help speed up development of custom IoT solutions. Both these solutions use Azure IoT Hub, the core Azure PaaS. 

The capabilities of Azure IoT Central can be categorized in terms of the four personas who interact with the application. 

The Builder uses web-based tools to create a template for the devices that connect to the IoT application. These templates can define several operational variables such as device properties, behavior settings, business properties and telemetry data characteristics. Builders can also define custom rules and actions to manage the data from connected devices. Azure IoT Central even generates simulated data for builders to test their device templates. 

The Device Developer then creates the code, using Microsoft’s open-source Azure IoT SDKs, that runs on the devices. These SDKs offer broad language, platform and protocol support to connect a range of devices to the Azure IoT Central application.

The Operator uses a customizable Azure IoT Central application UI for day-to-day management of the devices, including provisioning, monitoring and troubleshooting. 

The Administrator is responsible for managing access to the application by defining user roles and permissions. 

Google Cloud Platform:

Google’s Cloud IoT Core is a fully managed service for easily and securely connecting, managing, and ingesting data from millions of globally dispersed devices. There are two main components to the solution, a device manager and a protocol bridge.  

The device manager enables the configuration and management of individual devices and can be used to establish the identity of a device, authenticate the device, and remotely control the device from the cloud. The protocol bridge provides connection endpoints with native support for industry standard protocols such as MQTT and HTTP to connect and manage all devices and gateways as a single global system. 
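The protocol bridge's MQTT naming conventions can be sketched with a few string helpers. The client-ID and topic formats below follow Cloud IoT Core's documented scheme, but the project, registry and device names are hypothetical examples.

```python
# A sketch of the MQTT naming scheme used by Cloud IoT Core's protocol
# bridge; project/registry/device names are hypothetical examples.
def mqtt_client_id(project: str, region: str, registry: str, device: str) -> str:
    # Devices identify themselves with a fully qualified client ID.
    return (f"projects/{project}/locations/{region}"
            f"/registries/{registry}/devices/{device}")

def telemetry_topic(device: str) -> str:
    # Telemetry is published to the device's events topic.
    return f"/devices/{device}/events"

def config_topic(device: str) -> str:
    # Configuration pushed from the cloud arrives on this topic.
    return f"/devices/{device}/config"

print(mqtt_client_id("my-project", "europe-west1", "my-registry", "sensor-42"))
print(telemetry_topic("sensor-42"))
```

In a real device, these strings would be passed to an MQTT client library along with a per-device JWT for authentication.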

Google has also launched a Cloud IoT provisioning service, currently in early access, that leverages tamper-resistant hardware-based security to simplify the process of device provisioning and on-boarding for customers and OEMs. 

The Cloud IoT Core service runs on Google’s serverless infrastructure, which scales instantly and automatically in response to real-time changes. 

Edge Computing

Amazon Web Services:

AWS provides two solutions for edge computing: Amazon FreeRTOS, to program, deploy, secure, connect, and manage small, low-power edge devices; and AWS IoT Greengrass, for devices that can act locally on data while still using the cloud for management, analytics, and storage.

Amazon FreeRTOS is a popular open source operating system for microcontrollers that streamlines the task of connecting small, low-power devices to cloud services like AWS IoT Core, to more powerful edge devices running AWS IoT Greengrass, or even to a mobile device via Bluetooth Low Energy. It comes with software libraries that make it easy to configure network connectivity options, program device IoT capabilities and secure device and data connections.

With AWS IoT Greengrass, devices can be programmed to filter device data locally and transmit only the data required for cloud applications. This helps reduce cost while simultaneously increasing the quality of data transmitted to the cloud. AWS IoT Greengrass enables connected devices to run AWS Lambda functions, execute machine learning models and connect to third-party applications, on-premise software and AWS services using AWS IoT Greengrass Connectors. Device programming also becomes extremely easy as code can be developed and tested in the cloud and then be deployed seamlessly to the devices with AWS Lambda. 
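The local-filtering idea can be sketched as a pure function of the kind a Greengrass-deployed Lambda might run before anything leaves the device; the thresholds and field names are hypothetical examples.

```python
# A minimal sketch of local filtering at the edge: keep only readings
# outside a normal band, so just the interesting data is forwarded to
# the cloud. Thresholds and field names are hypothetical examples.
def filter_readings(readings, low=18.0, high=25.0):
    """Return only the readings worth transmitting upstream."""
    return [r for r in readings if not (low <= r["temp_c"] <= high)]

readings = [
    {"device": "sensor-42", "temp_c": 21.5},  # normal, dropped locally
    {"device": "sensor-42", "temp_c": 31.0},  # anomalous, forwarded
    {"device": "sensor-43", "temp_c": 17.2},  # anomalous, forwarded
]
to_cloud = filter_readings(readings)
print(to_cloud)  # only the two out-of-band readings remain
```

The cost saving described above follows directly: the in-band reading never consumes bandwidth or cloud storage at all.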

Microsoft Azure:

Azure IoT Edge is a fully managed service built on Azure IoT Hub that extends cloud workloads, including AI, analytics, third-party services and business logic, to edge devices via standard containers. For instance, users have the option of leveraging Project Brainwave, a deep learning platform from Microsoft for real-time AI serving in the cloud, to deliver real-time AI to the edge. Processing data locally and transmitting back only the data required for further analysis can reduce the cost and enhance the quality of data. The solution enables AI and analytics models to be built and trained in the cloud before they are deployed on-premise. All workloads can be remotely deployed and managed through Azure IoT Hub with zero-touch device provisioning. 

As with AWS, Azure IoT Edge also offers device management capabilities even when devices are offline or have intermittent connectivity. The solution automatically syncs the latest device states on reconnection to ensure seamless operability.

Google Cloud Platform:

Google’s IoT edge service strategy is centered on two components: Edge TPU, a new hardware chip, and Cloud IoT Edge, a software stack that extends Google Cloud AI capabilities to gateways and connected devices.

Edge TPU is a purpose-built ASIC chip designed and optimized to run TensorFlow Lite ML models at the edge and within a small footprint. Edge TPUs complement Google’s cloud IoT capabilities by allowing customers to build and train machine learning models in the cloud and then run the models on Cloud IoT Edge devices. The combination extends Google Cloud’s powerful data processing and machine learning capabilities to IoT gateways and end devices even while increasing operational reliability, enhancing device and data security and enabling faster real-time predictions for critical IoT operations.

The company is working with semiconductor manufacturers and device makers to embed its IoT edge innovations in the development of intelligent devices and gateways. 

IoT Analytics

Amazon Web Services: 

AWS IoT Analytics is a fully managed service that automates every stage of the IoT data analytics process. The service can be configured to automatically filter data based on need, enrich data with device-specific metadata, run scheduled or ad hoc queries using the built-in query engine, or perform more complex analytics and machine learning inference. Users can also schedule and execute their own custom analysis, packaged in a container, and the service will automate the execution.
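The metadata-enrichment step can be sketched as follows; the registry contents and field names are hypothetical examples, not AWS IoT Analytics' actual API.

```python
# A sketch of enriching device messages with device-specific metadata,
# as an IoT analytics pipeline stage might. Registry contents and field
# names are hypothetical examples.
DEVICE_REGISTRY = {
    "sensor-42": {"site": "gothenburg-plant", "firmware": "1.4.2"},
}

def enrich(message: dict, registry: dict) -> dict:
    """Merge registry metadata for the sending device into the message."""
    meta = registry.get(message.get("device"), {})
    return {**message, **meta}

msg = enrich({"device": "sensor-42", "temp_c": 22.1}, DEVICE_REGISTRY)
print(msg)
```

Enriching at ingestion time means downstream queries can group by site or firmware version without joining against a separate device table.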

AWS IoT Analytics stores device data in an IoT-optimized time-series data store and offers capabilities for time-series analysis. The company also offers a fully managed, serverless time series data service called Amazon Timestream that can process trillions of events per day up to 1,000 times faster than, and at as little as one-tenth the cost of, conventional relational databases.

AWS also offers real-time IoT device monitoring either as an out-of-the-box feature of its Kinesis Data Analytics solution or as a reference implementation for building custom device monitoring solutions. 

Microsoft Azure:

Azure Stream Analytics is a fully managed serverless PaaS offering designed to analyze and process streaming data from multiple sources simultaneously and in real-time. It integrates with Azure Event Hubs and Azure IoT Hub to ingest millions of events per second from a variety of sources. The service can be configured to trigger relevant actions and initiate appropriate workflows based on the patterns and relationships identified in the extracted information. 

Azure Stream Analytics on Azure IoT Edge enables the deployment of near-real-time intelligence closer to IoT devices to complement big data analytics done in the cloud. A job can be created in Azure Stream Analytics and then deployed and managed using Azure IoT Hub. 

The Microsoft IoT platform also offers Time Series Insights, a fully managed, end-to-end solution to ingest, store and query highly contextualized, IoT time series data. Time Series Insights seamlessly integrates with Azure IoT Hub to instantly ingest billions of events for analytics. Data and insights from this solution can be integrated into existing applications and workflows or new custom solutions can be created with the Time Series Insights Apache Parquet-based flexible storage system and REST APIs.

Google Cloud Platform:

Google Cloud IoT offers a range of services, at the edge and in the cloud, to extract real-time insights from a distributed network of IoT devices. The device data captured by Cloud IoT Core is aggregated into a single global system and published to Cloud Pub/Sub, part of Google Cloud’s stream analytics program, for downstream analytics. Cloud Pub/Sub ingests event streams and delivers them to Cloud Dataflow, a serverless fully managed data transformation and enrichment service, to ensure reliable, exactly-once, low-latency data transformation. The transformed data is then analyzed with BigQuery, a serverless cloud data warehouse with built-in in-memory BI Engine and machine learning.     

Using Cloud IoT Edge, discussed earlier in this article, all these data processing, analytics and machine learning capabilities can then be extended to billions of edge devices. 

Each of these platforms offers a vast range of IoT-specific tools, solutions and services, plus another layer of complex cloud services and third-party integrations, which makes an exhaustive comparison all but impossible. But features such as device provisioning & management, real-time streaming analytics and edge computing are critical to every IoT implementation irrespective of application or vertical. Of course, other factors, like pricing and security, also come into play. But looking at a platform’s core IoT, edge computing and real-time analytics capabilities affords a like-for-like comparison and provides the context for a more detailed drill-down.

Enterprise IoT: Why Securing IoT Devices Needs to Be the Number One Priority

The number of IoT devices around the world keeps on growing. Globally, there are now more than 26 billion connected devices, according to research from Statista – up from 15 billion in 2015 – with the number projected to rise to over 75 billion by 2025. In 2018, the global IoT market stood at about $164 billion, and is expected to increase almost tenfold over the next six years, reaching around $1.6 trillion by 2025. The popularity of IoT technology is drastically transforming how society functions and how businesses are run. Be it manufacturing, transportation, telecoms, logistics, retail, insurance, finance or healthcare, the vast proliferation of IoT technology is on course to disrupt practically every industry on the planet. However, as more and more IoT devices are deployed across the enterprise, new challenges emerge for developers – and securing IoT systems is chief among them. 


IoT in the Enterprise

Although much media attention surrounding IoT has focused on consumer products – smart speakers, thermostats, lights, door locks, fridges, etc. – some of the most exciting IoT innovations are coming from the business sector. The combination of sensor data and sophisticated analytical algorithms is allowing companies in a broad range of industries to streamline operations, increase productivity, develop leading-edge products, and solve age-old business problems. Consider the performance of all types of equipment and machinery – from jet engines to HVAC systems – being constantly monitored with sensors to predict the point of failure and avoid downtime automatically. Or how about driver speed behavior information being shared in real-time with an insurer – or geolocation beacons pushing targeted advertisements and marketing messages to customers when they are in or near a store. Usage of data from IoT sensors and controllers for better decision making – combined with automation for better efficiencies – is enormously valuable. As such, more and more businesses are getting on board with the IoT revolution.  

84% of the 700+ executives from a range of sectors interviewed for a Forbes Insights survey last year said that their IoT networks had grown over the previous three years. What’s more, 60% said that their organizations were expanding or transforming with new lines of business thanks to IoT initiatives, and 36% were considering potential new business directions. 63% were already delivering new or updated services directly to customers using the Internet of Things.   

By industry, nearly six in ten (58%) executives in the financial services sector reported having well-developed IoT initiatives, as did 55% of those in healthcare, 53% in communications, 51% in manufacturing, and 51% in retail.


The survey also showed that leveraging IoT as part of a business transformation strategy increases profitability. 75% of leading enterprises credited IoT with delivering increased revenue. 45% reported that the Internet of Things had helped boost profits by up to 5% over the previous year, another 41% said that it had boosted profits by 5% to 15%, and 14% had experienced a profit boost of more than 15% – and all anticipated IoT to have a significant profit-boosting impact in the year ahead. 


However, key to profitability and business success with IoT technology is security. Indeed, along with developing and maintaining appropriate algorithms and software, and speed of rollout, securing IoT was cited by the executives as one of the top three IoT challenges. How do organizations ensure the integrity of their IoT data? How do they ensure that the various operational systems being automated with the technology are controlled as intended? These questions need answering, for many hard IoT security lessons have been learned in recent years.

Securing IoT in the Enterprise – An Ongoing Challenge 

As the number of connected IoT devices in the enterprise increases, new threats emerge. Distributed Denial of Service (DDoS) attacks provide a number of high-profile examples. Here, vulnerable connected devices are hijacked by hackers and used to flood a target – such as a Domain Name System (DNS) provider – with repeated and frequent queries, causing it to crash. For instance, the Mirai botnet attack in 2016 disrupted internet access across North America and Europe by taking over hundreds of thousands of IoT devices – mainly IP security cameras, network video recorders and digital video recorders – and using them in a massive DDoS attack against the DNS provider Dyn.

Mirai was able to take advantage of these insecure IoT devices in a simple but clever way – by scanning big blocks of the internet for open Telnet ports, then attempting to log in using 61 username/password combinations that are frequently used as the default for these devices and never changed. In this way, it was able to amass an army of compromised CCTV cameras and routers to launch the attack. Perhaps most concerning of all, however, is that the Mirai botnet source code still exists “in the wild”, meaning that anyone can use it to attempt to launch a DDoS attack against any business with IoT implementations – and many cybercriminals have done just that. 
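Mirai's technique also suggests the defense: audit device credentials against known factory defaults before they ever reach the network. A minimal sketch, with a small illustrative sample rather than Mirai's actual 61-entry list:

```python
# Audit credentials against known factory defaults, inverting Mirai's
# attack into a pre-deployment check. This short list is an illustrative
# sample, NOT Mirai's actual 61-entry table.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Flag credentials that match a known factory-default pair."""
    return (username, password) in KNOWN_DEFAULTS

print(uses_default_credentials("admin", "admin"))          # flagged
print(uses_default_credentials("admin", "s7r0ng-un1que"))  # passes
```

A check like this, run against a device inventory during provisioning, catches exactly the class of weakness Mirai exploited.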

Another example involves a US university in 2017, which suddenly found over 5,000 on-campus IoT devices – including vending machines and light bulbs – making hundreds of DNS queries every 15 minutes to sub-domains related to seafood. The botnet spread across the network and launched a DDoS attack, resulting in slow or completely inaccessible connectivity across the campus. Again, it was weak default passwords that left these devices vulnerable. 

One of the main problems with IoT devices being used in workplace environments is that many are not inherently secure. Part of the issue is that there are literally thousands of individual IoT manufacturing companies – many of which started life in the consumer market – with very little consistency between them. What this means is that each IoT device that ends up in the workplace – be it a lightbulb, vending machine, or CCTV camera – will likely have its own operating system. Each will likely have its own security setup as well – which will be different from every other connected thing in the office – and a different online dashboard from which it is operated. Many of these devices are also shipped with default usernames and passwords, making them inherently hackable. The manufacturers, meanwhile, take little or no responsibility if any of these devices are hacked, meaning the onus for securing IoT in all its forms falls entirely upon an organization’s IT department – and too often no one is assigned to this critical task. 

What makes it so critical? Well, thanks to Shodan – a specialized search engine that lets users find information about IoT devices (including computers, cameras, printers, routers and servers) – anyone, including hackers, can locate devices that use default logins with a simple web search. However, what’s good for hackers can be seen as being good for enterprises, too. Though the very existence of Shodan is perhaps scary, IT professionals should be using the search engine proactively as a security tool to find out if any information about devices on the company’s network is publicly accessible. After that, securing IoT is down to them. 


Another issue that renders securing IoT devices absolutely essential is the threat of spy tech and ransomware. Many IoT devices incorporate microphones, cameras, and the means to record their location, leaving organizations vulnerable to sensitive data being stolen or company secrets being exposed and held to ransom. Things like IoT-enabled building management systems can also be left open to surveillance or meddling from malicious third parties. A hacker could, for instance, lock all the doors in an office building or cut all the power. As an example, researchers at Def Con demonstrated how such a system can be targeted with ransomware by gaining full remote control of a connected thermostat. In a real-life scenario, such an attack could result in an office becoming uninhabitable, opening an organization up to ransom demands to regain control. 

In short, with the ever-increasing number of IoT devices an organization relies upon, the attack surface grows in kind – as does the unpredictability with regards to how hackers may seek to exploit them.

The Huge Costs of Not Securing IoT 

Securing IoT should be a top priority for practically all businesses for the simple reason that practically all businesses are invested in IoT. In fact, according to recent research from DigiCert – State of IoT Security Survey 2018 – 92% of organizations report that IoT will be important to their business by 2020. The executives interviewed cited increased operational efficiency, improving the customer experience, growing revenue, and achieving business agility as the top four goals of their IoT investments. 


However, securing IoT remains the biggest concern for 82% of these organizations. And it’s no wonder – a full 100% of bottom-tier enterprises (i.e. enterprises that are having the most problems with IoT security issues) had experienced at least one IoT security incident in 2018. Of these, 25% reported related losses of at least $34 million over the previous two years. 


These bottom-tier companies are much more likely to experience data breaches, malware/ransomware attacks, unauthorized access/control of IoT devices, and IoT-based DDoS attacks than top-tier companies (i.e. companies that are best prepared in terms of IoT security). So – what are top-tier companies doing differently? Well, DigiCert found that they all had five key behaviors in common – they were all ensuring device data integrity (authentication), implementing scalable security, securing over-the-air updates, utilizing software-based key storage, and encrypting all sensitive data. 

Speaking to Security Now, Mike Nelson, Vice President of IoT Security at DigiCert, comments on the findings: “The security challenges presented by IoT are similar to the many IT and internet security challenges industries have faced for years. Encryption of data in transit, authentication of connections, ensuring the integrity of data – these challenges are not new. However, in the IoT ecosystem these challenges require new and unique ways of thinking to make sure the way you’re solving those challenges works. Regarding evolution of security challenges, the biggest challenge is simply the scale and the magnitude of growth. Having scalable solutions is going to be critical.”


Final Thoughts

IoT has the potential to open up many new opportunities for growth and agility within the enterprise. However, securing IoT devices remains absolutely crucial, and organizations need to take the necessary steps to ensure that their devices and data are adequately protected from end to end. In practice, that means:

  • Conducting a thorough review of the current IoT environment, evaluating the risks, and prioritizing the primary security concerns to be addressed.
  • Mandating strong, unique passwords for every device.
  • Keeping firmware constantly updated, and using only secure web, mobile and cloud applications with strong encryption and data protection features.
  • Encrypting all data – both at rest and in transit – with end-to-end encryption made a product requirement for all devices that connect, and ensuring data is stored and processed securely after it has crossed the network.
  • Monitoring and managing device updates around the clock and around the calendar.
  • Ensuring the security framework and architecture is scalable to support IoT deployments both now and in the future.

As such, working with third parties that have the resources and expertise to manage scaling IoT security programs will be invaluable.
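The encryption-in-transit requirement can be sketched with Python's standard ssl module: a client-side TLS context of the kind a device might use for MQTT over TLS on port 8883. Nothing here is vendor-specific; the settings simply make the secure defaults explicit.

```python
import ssl

# A sketch of "encrypt in transit" for a device connection, e.g. MQTT
# over TLS on port 8883. The context verifies the server certificate
# against trusted CAs and refuses legacy protocol versions.
def secure_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # CA verification on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no legacy TLS/SSL
    ctx.check_hostname = True                     # server name must match cert
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverified servers
    return ctx

ctx = secure_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # prints True
```

A context like this would be handed to the device's MQTT or HTTP client; the crucial point is that certificate verification is never disabled, even during development.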

Vinnter serves as an enabler for developing new business and service strategies for traditional industries, as well as fresh start-ups. We help companies stay competitive through embedded software development, communications and connectivity, hardware design, cloud services and secure IoT platforms. Our skilled and experienced teams of developers, engineers and business consultants will help you redefine your organization for the digital age, creating new, highly-secure connected products and digital services that meet the evolving demands of your customers. Get in touch to find out more. 

Making sense of IoT connectivity protocols

IoT, short for the Internet of Things, refers to a connected system of devices, machines, objects, animals or people able to communicate autonomously across a common network, without the need for human-to-human or human-to-computer interaction. This relatively recent innovation is already revolutionizing many sectors with its ability to add connected intelligence to almost everything: smart homes, smart automobiles, smart factories, smart buildings, smart cities, smart power grids, smart healthcare, smart agriculture and smart livestock farming, to name just a few.

IoT is still a nascent innovation but it has an evolutionary trail that leads back, as per the ITU (International Telecommunications Union), to the early years of the last century. 

A brief history of telemetry, M2M and IoT

A 2016 Intro to Internet of Things presentation from the ITU charts the legacy of the modern IoT revolution back to 1912, when an electric utility in Chicago developed a telemetry system to monitor electrical loads in the power grid using the city’s telephone lines. The next big milestone, wireless telemetry using radio transmissions rather than landline infrastructure, was passed in 1930 and used to monitor weather conditions from balloons. Then came aerospace telemetry with the launch of Sputnik in 1957, an event widely considered the precursor to today’s modern satellite communications era.     

At this point, M2M as we know it was still some years away, awaiting two landmark breakthroughs, nearly three decades apart, to propel it into the mainstream.

The first breakthrough came in 1968, when Greek-American inventor and businessman Theodore G. Paraskevakos came up with the idea of combining telephony and computing – the theoretical foundation of modern M2M technologies – while working on a caller line ID system. The second happened in 1995, when Siemens launched the M1, a GSM data module that allowed machines to communicate over wireless networks. From there on, regular improvements in wireless connectivity, and the Federal Communications Commission’s advocacy for spectrum-efficient digital networks over analog ones, paved the way for more widespread adoption of cellular M2M technologies.

IoT is the most recent mutation in this extended evolutionary chain of autonomous machine-to-machine connectivity. However, though both approaches share the same foundational principles, there are some marked differences as shown in the chart below. 


Perhaps one of the most significant distinctions between M2M and IoT is in terms of ambition and scope. Current estimates indicate anywhere between 22 and 25 billion connected IoT devices by 2025. But before we have even tapped the potential of networking billions of physical objects, industry aspirations already envision an Internet of Everything, where not just objects and devices but everything – people, processes, data and things – is connected into one seamless and intelligent ecosystem.

But whatever the breadth of the ambitions for IoT, the availability of quality connectivity options will ultimately determine the value of the outcome. Today there is an overwhelming range of connectivity technologies on offer, with capabilities suited to different IoT applications.

Classifying IoT connectivity technologies

When it comes to IoT connectivity, technology is constantly changing, with existing options continually updated and upgraded and new alternatives regularly introduced. And given the diversity of the IoT applications market, available solutions can be classified across a complex matrix of characteristics including range, bandwidth, power consumption, cost, ease of implementation and security. But it is possible to classify these solutions using a simple four-part taxonomy, namely:

  1. Classic connectivity solutions, comprising traditional short-range wireless solutions.
  2. Non-cellular IoT, proprietary technologies deployed by industry players/consortia.
  3. Cellular IoT, standardized technologies that operate in the licensed spectrum.
  4. Satellite IoT, for areas that cannot be covered by any of the above.

Both cellular and non-cellular IoT technologies fall under the broad, and rather self-explanatory, category of low-power wide-area networks or LPWANs. While the former is a standardized technology provided in the licensed spectrum by mobile network operators, the latter refers to private proprietary solutions operating in unlicensed radio frequencies. Both solutions, however, are purpose-designed for IoT and are capable of transmitting small packets of data across long distances, over an extended period, with very limited resource usage. The forecast for LPWAN technologies is that they will cover 100 percent of the global population by 2022. 

1. Classic Connectivity: 

There are a range of technologies that fall under this category, including Wi-Fi, Bluetooth and Bluetooth Low Energy, NFC, RFID, and mesh technologies such as ZigBee, Thread and Z-Wave. As mentioned earlier, these are all short-range solutions that are ideal for bounded environments such as smart homes. But if short range seems like a limitation, these solutions make up for it by enabling high-bandwidth transmissions at low power consumption rates. Most of these solutions were not designed specifically for IoT, but as long as the requirement does not include long-distance data transmission, they can still serve as a crucial hub in a larger hybrid IoT environment.

2. Non-Cellular IoT: 

There are currently two popular LPWAN solutions, LoRaWAN and Sigfox, in this space.  

  • LoRaWAN is an open IoT protocol for secure, carrier-grade LPWAN connectivity. It is backed by the LoRa Alliance, a global nonprofit association of telecoms, technology blue chips, hardware manufacturers, systems integrators, and sensor and semiconductor majors. 

The protocol wirelessly connects battery-operated ‘things’ to the internet, enabling low-cost, low-power, mobile and secure bi-directional communication. The solution can also scale from a single gateway installation to a global network of devices across IoT, M2M and other large-scale smart applications. Though the LoRaWAN protocol defines the technical implementation, it does not place any restrictions on the type of deployment, giving customers the flexibility to innovate. One of the arguments challenging the technology’s open-standard credentials has focused on implementations being tied to chips from LoRa Alliance member Semtech. However, other suppliers have recently announced an interest in adopting LoRa radio technology.


LoRaWAN already has a massive global footprint, with over 100 network operators having deployed networks across the world by the end of 2018. The alliance also announced that it has tripled the number of end-devices connecting to those networks.

  • Sigfox was one of the first companies to create a dedicated IoT network that used Ultra Narrow Band modulation in the 200 kHz public band to exchange radio messages over the air. The company’s stated ambition is to mitigate the cost and complexity of IoT adoption by eliminating the need for sensor batteries and reducing the dependence on expensive silicon modules. 

The company’s proprietary protocol is designed for IoT applications that transmit data in infrequent short bursts across long distances, while ensuring low connectivity costs, and reducing energy consumption. It works with several large manufacturers such as STMicroelectronics, Atmel, and Texas Instruments for its endpoint modules in order to ensure the lowest cost for its customers.

The Sigfox network is currently operational in 60 countries, covering an estimated 1 billion people worldwide, connecting 6.2 million devices and transmitting 13 million messages each day. Sigfox has also teamed up with satellite operator Eutelsat to launch a satellite that will enable global coverage.  

There are a few other players, like Link Labs and Weightless SIG, offering their own LPWAN technologies. But LoRaWAN and Sigfox dominate the market, accounting for nearly two-thirds of low-power wide-area network deployments.

There is, however, a significant challenge emerging from their counterparts in cellular IoT with technologies like NB-IoT and LTE-M. 

3. Cellular/Mobile IoT: 

Proprietary technologies operating in the unlicensed spectrum may seem to have the market cornered, but cellular/mobile IoT is rapidly catching up.  Earlier this year the GSMA announced the availability of mobile low-power wide-area IoT networks in 50 markets around the world with a total of 114 launches as of May 2019. 


These launches include both LTE-M (LTE Cat-M/eMTC) and NarrowBand IoT (NB-IoT/LTE Cat-NB), a set of complementary, IoT-optimized cellular standards developed by the 3GPP (3rd Generation Partnership Project). Both these Mobile IoT networks are ideal for low-cost, low-power, long-range IoT applications and together they are positioned to address the entire spectrum of LPWAN needs across a range of industries and use cases. Operators have a choice of cellular technologies to ensure that they can provide clearly differentiated IoT services based on the market dynamics in their regions. And both these technologies can coexist with 2G, 3G and 4G networks. There are, however, some key distinctions between the two, stemming primarily from the focus on covering as wide a range of IoT applications as possible.  

  • LTE-M (Long Term Evolution for Machines) enables the reuse of existing LTE mobile network infrastructure while reducing device complexity, lowering power consumption and extending coverage, including better indoor penetration. LTE-M standards are designed to deliver a 10X improvement in battery life and bring down module costs by as much as 50 percent compared to standard LTE devices.

One significant development in the LTE-IoT market has been the launch of the MulteFire Alliance, a global consortium that wants to extend the benefits of LTE to the unlicensed spectrum. The group’s MulteFire LTE technology is built on 3GPP standards and will continue to evolve with those standards but operates in the unlicensed or shared spectrum. The objective is to blend the benefits of LTE with ease of deployment. Key features of the latest MulteFire Release 1.1 specifications include optimization for Industrial IoT, support for eMTC-U and NB-IoT-U, and access to new spectrum bands.  

  • NarrowBand IoT, or NB-IoT, is based on narrowband radio technology and is targeted at low-complexity, low-performance, cost-sensitive applications in the Massive IoT segment. The technology is relatively easy to design and deploy as it is not as complex as traditional cellular modules. In addition, it enhances network capacity and efficiency to support a massive number of low-throughput connections over just 200 kHz of spectrum. NB-IoT can also be significantly more economical to deploy than other technologies, as it eliminates the need for gateways by communicating directly with the primary server.

Both these technologies are already 5G-ready. They will continue to evolve to support 5G use cases and coexist with other 3GPP 5G technologies.

The race for 5G deployments has already begun in earnest. Following the launch of 5G services in South Korea and the US earlier this year, 16 more markets are expected to join this as-yet exclusive club before the year is out.

The emergence of 5G, the fifth generation of wireless mobile communications, will no doubt have a major impact on how these services are delivered. These fifth generation networks, with their promise of higher capacity, lower latency and energy/cost savings, have the potential to support more innovative bandwidth-intensive applications and massive machine-type communications (mMTC). 

4. Satellite IoT:

This is ideal for remote areas that are not covered by cellular service. Though that may seem like a niche market, some reports indicate that there may be as many as 1,600 satellites dedicated to IoT applications over the next 5 years. Satellite communications company Iridium has partnered with Amazon Web Services to launch Iridium CloudConnect, the first satellite-powered cloud-based solution for Internet of Things (IoT) applications. 

All of which brings up the question, which IoT protocol is right for you? Every technology discussed here has its USPs and its limitations. Every IoT application has its own requirements in terms of data rate, latency, deployment cost etc. A protocol that works perfectly well for a particular use case may prove to be completely inadequate for another. 

So there is no one-size-fits-all protocol that can be prescribed by application or even by industry. In fact, sticking to just one technology standard doesn’t make sense in many IoT implementations – and that’s according to Sigfox.