Why CloudTern Chose Kubernetes for Container Orchestration?
In the traditional software development environment, creating an application was largely a matter of writing the code. However, rapid innovation has brought in a myriad of technologies, tools, frameworks, architectures and interfaces, adding enormous complexity to application development environments. The advent of smartphones has opened up the world of mobile computing, which adds to this challenge. Developers now have to consider all these aspects while creating an application. Containerization solves these challenges by enabling developers to focus on just the application and not worry about runtime environment differences.
An Overview of Containerization
A container is a standalone, portable software unit that packages code together with its entire runtime environment: binaries, libraries, dependencies, configuration files and so on. By abstracting away differences in the underlying infrastructure, OS and platform, containers allow applications to move seamlessly between computing environments. From a large enterprise application to a small microservice, containerization can be applied to any type of application or service. The absence of a full OS image makes containers lightweight and highly portable.
The Evolution of Containerization
Containerization is not a new concept and has been around for decades. Unix chroot was the first mechanism to implement a form of containerization by giving each process its own view of the file system. FreeBSD extended this idea in 2000 with Jails, adding sandboxing for file system isolation. Linux implemented similar isolation in Linux-VServer in 2001, and Solaris released containers for x86 in 2004. Google introduced Process Containers in 2006 to isolate resources, and Linux gained the LXC container manager in 2008. Cloud Foundry built Warden on top of LXC, enabling it to run on any operating system, and Google released its Linux application container stack, lmctfy, in 2013. However, containerization gained widespread adoption only with the advent of Docker in 2013.
Virtual Machines Vs Containers
Containers are often confused with virtual machines. The two share a lot of similarities in terms of resource isolation and allocation but differ in how they work. A virtual machine is created by abstracting the physical resources of a machine and runs as an isolated computing environment that delivers the functionality of a full computing device. Each virtual machine contains its own copy of an operating system along with all the dependencies of the application running on it, and a hypervisor is used to run multiple VMs on a single machine. Because each VM carries a full OS copy, it is larger in size and takes longer to boot.
While a VM virtualizes hardware resources, a container virtualizes the operating system. Multiple containers share the same OS kernel and run in isolation on the same machine. Because they do not bundle a guest OS, containers are lightweight and portable, allow more applications to run per host, and boot in far less time. By combining both technologies, organizations gain more flexibility in managing and deploying a wide range of applications.
Benefits of Containerization
Containers bring impressive benefits to organizations. Here are a few of them:
Highly Portable
While the absence of a full OS copy makes a container lightweight, the abstraction of the underlying infrastructure makes it highly portable. This means containers can be deployed with equal ease in an on-premises data center, in a public cloud or on an individual laptop. Containers run on Windows, macOS, Linux, virtual machines or even bare metal, offering greater flexibility for developing and deploying applications.
Improved Efficiency and Increased Productivity
Containers clearly separate the roles of developers and operations teams. By bundling language runtimes, software libraries and dependencies, containers provide predictable, consistent environments regardless of where the applications run. Operations and development teams can therefore stop worrying about software differences across environments and focus on improving application performance, resulting in greater productivity and efficiency.
Faster and Better Application Deployment
Containerization significantly improves the way applications are built, tested and deployed. Compared to virtual machines that take minutes to load, containers can be spun up within seconds. They share a single OS kernel, boot much faster and consume less memory. By packaging an app along with its dependencies into isolated software units, containers make it easy to replicate apps on multiple machines across clusters and to deploy and scale rapidly.
Docker – A Synonym for a Container
Docker is an open-source tool that helps both development and operations teams build, manage and deploy containers with ease. Docker was originally created for Linux but now supports macOS and Windows environments. Docker Engine is the runtime that lets you build and run containers, and the resulting images can be stored and shared in the Docker Hub container registry.
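For illustration, a minimal Docker workflow might look like the sketch below; the image name, tag and application are placeholders rather than CloudTern's actual setup.

# Build an image from the Dockerfile in the current directory (names are illustrative).
docker build -t cloudtern/demo-app:1.0 .
# Run the image as a container, mapping port 8080 on the host to port 80 in the container.
docker run -d --name demo-app -p 8080:80 cloudtern/demo-app:1.0
# Push the image to a registry such as Docker Hub so other environments can pull it.
docker push cloudtern/demo-app:1.0

The same image then runs unchanged on a laptop, an on-premises server or a cloud VM, which is the portability point made above.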
As a leading cloud solutions company, CloudTern manages containerization needs for multiple companies. Docker offers the flexibility to integrate with major infrastructure automation and configuration management solutions such as Puppet, Chef, Ansible and SaltStack, or to manage software environments independently. In addition, Docker lets us plug into the CI/CD pipeline, run multiple development environments that closely mirror production on a single machine, and try different configurations, servers and devices for running test suites. As a result, our clients are able to deploy software more frequently and recover faster while significantly reducing their change failure rate.
While there are other container tools such as rkt, Canonical's LXD and Parallels' Virtuozzo, Docker is the most popular and has practically become a synonym for a container. The fact that Docker can be used on any operating system or cloud makes it the first choice for many. At CloudTern, we proactively monitor technology changes and offer the best IT solutions for our clients, so Docker is our first choice for all containerization needs.
Why Container Orchestration?
Looking at the significant benefits offered by containers, several organizations are now implementing container technology in their CI/CD environments. Because containers are quick to spin up, lightweight and portable, thousands of them get created and deployed across the infrastructure. A typical IT environment runs hundreds of short-lived containers, which makes infrastructure monitoring highly complex. You need to closely monitor and manage them to know what is running on each server. This is where container orchestration tools come to the rescue.
Kubernetes, Mesosphere DC/OS and Docker Swarm are the most popular container orchestration tools.
An Overview of Kubernetes
Kubernetes is the most widely used container orchestration tool today. It was developed by Google, released in 2014 and is now managed by the Cloud Native Computing Foundation (CNCF). Kubernetes allows organizations to automate the deployment, scaling and management of containerized applications across a cluster of nodes. It is standalone software that can manage containers on its own or work in tandem with Docker.
A Quick Overview of Kubernetes Architecture
The Kubernetes architecture consists of two core components (a short kubectl sketch for inspecting them follows the list):
- Nodes (bare metal or virtual machines): Nodes are divided into two types:
- Master: A master node is where the Kubernetes control plane is installed. The master node controls and schedules pods across the worker nodes where the application runs, while keeping the cluster in its predefined desired state. Multiple master nodes are implemented to maintain high availability. Here are the key components of a master node:
- Kube-controller-manager: maintains the desired state of the cluster by listening to the kube-apiserver for information about the current state.
- Kube-scheduler: schedules pods and jobs across the cluster, via the kube-apiserver, based on resource availability and predefined policies.
- Kube-apiserver: the API server that lets UI dashboards and CLI tools interact with the Kubernetes cluster.
- Etcd: the master node's key-value storage that holds cluster definitions, policies and the state of the system.
- Worker Node: This is where the actual application runs. It contains the following components:
- Container runtime (Docker): the Docker Engine that runs and manages the containers on the node.
- Kubelet: It receives instructions from the master node and executes them while sending information about the state of the node to the master.
- Kube-proxy: facilitates communication between microservices and pods within the cluster and connects the application to the outside world.
- Pods: A pod is Kubernetes' basic unit of deployment. Containers that need to co-exist run together in a single pod.
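Here is the short kubectl sketch promised above for inspecting these pieces on a running cluster (it assumes kubectl is already configured to talk to your cluster):

# List the master/control-plane and worker nodes that make up the cluster.
kubectl get nodes -o wide
# List the pods scheduled across those nodes, in all namespaces.
kubectl get pods --all-namespaces -o wide
# Show the control-plane components (kube-apiserver, etcd, scheduler, controller-manager).
kubectl get pods -n kube-system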
Why CloudTern Chose Kubernetes?
As a leading cloud managed services company, CloudTern handles the cloud networks of multiple organizations. A typical IT network comprises multiple nodes, which can be anything from virtual machines to bare metal servers. IT administrators implement multiple nodes for two important reasons. Firstly, high availability is a key requirement for cloud-based services: the application should remain available to users even when a node goes down, so a robust infrastructure has to be set up. Secondly, scalability is a key concern. As application traffic increases, containers should be added or removed dynamically on demand, and the multiple containers of an application should be able to talk to each other as well.
Docker Swarm is the container orchestration tool offered by Docker. It uses the Docker API and works in tight integration with Docker. However, CloudTern chose Kubernetes because it efficiently coordinates large clusters of nodes and scales better in production than standalone Docker, which manages containers on a single node. Kubernetes also helps you manage and orchestrate container resources from a central dashboard.
Kubernetes securely manages networking and load balancing and scales well. In addition, it allows you to group containers based on criteria such as staging environments and to implement access permissions, which eliminates the need for the development team to mock up the entire microservices architecture of an application. You can deploy software across pods in a scale-out manner and scale deployments in or out on demand. It gives clear visibility into the deployment process: you can check completed, in-progress and failed deployments from a single pane of glass, and save time by pausing and resuming a deployment at your convenience. The version control feature allows you to update pods with the latest image of the application and roll back to a previous one if needed.
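As a hedged illustration of these deployment features, the commands below sketch creating a deployment, scaling it and rolling an image forward and back; the deployment name, image and container name are assumptions rather than CloudTern specifics.

# Create a deployment from a container image (names are placeholders).
kubectl create deployment web --image=cloudtern/demo-app:1.0
# Scale out on demand, then scale back in the same way.
kubectl scale deployment web --replicas=5
# Roll out a new image version; the container name (demo-app) must match the pod spec.
kubectl set image deployment/web demo-app=cloudtern/demo-app:1.1
kubectl rollout status deployment/web
# Pause and resume a rollout at your convenience, or roll back to the previous version.
kubectl rollout pause deployment/web
kubectl rollout resume deployment/web
kubectl rollout undo deployment/web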
With support for clusters of up to 5,000 nodes and 300,000 containers, Kubernetes works well for organizations of all sizes. Combined with Docker, Kubernetes offers a highly scalable orchestration system that delivers fast and reliable applications. Kubernetes also enjoys a large and vibrant community, which means you can always stay up to date with what's happening with the tool or get help resolving any issues.
The Bottom Line
Kubernetes is not just our personal choice. Today, Kubernetes is the market leader in container orchestration. According to StackRox, Kubernetes market adoption reached 86% by spring 2019. These market statistics once again affirm that CloudTern always offers the right tools for the right IT tasks.
Beacon technology for Asset Tracking
The advent of the Internet of Things (IoT) has not only revolutionized IT networks but also paved the way for a range of new and innovative technologies. Beacon technology is one of them. From the time Apple introduced beacon technology in 2013 to date, the technology has greatly evolved and keeps getting better every day. While beacons were initially used by retail businesses, their use is now extending to every field. Asset tracking with beacons is the new trend: using beacon technology, businesses are able to implement cost-effective and highly scalable asset tracking solutions.
An Overview of Beacons
A beacon is a small Bluetooth-based device that continuously transmits radio signals. This small form-factor device contains a little radio transmitter and a battery and uses the Bluetooth Low Energy (BLE) protocol to transmit data. Because BLE consumes very little energy, beacons can run for long periods without draining the battery; depending on the size and functionality of the device, beacon batteries last from 6 months to 5 years. However, beacons transmit only small amounts of data, which means you can't transfer audio or streaming data.
Beacon technology is similar to Near-Field Communication (NFC) technology. The difference lies in the range: while NFC works within about 8 inches, beacons have a range of up to 70 meters.
The State of Beacon Market
Beacons have become the first choice for many companies when it comes to Real-Time Location System (RTLS) solutions. According to Grand View Research, the global Bluetooth beacon market is expected to reach $58.7 billion by 2025, growing at a CAGR of 95.3% between 2017 and 2025. Similarly, Allied Market Research reports that the global beacon market will reach $14.84 billion by 2024, growing at a CAGR of 61.5% between 2018 and 2024. Global Market Insights reports that the market value of beacons was $170 million in 2016 and expects it to grow at a CAGR of 80% between 2017 and 2024. The retail industry is the largest market for beacons, followed by the health sector.
Source: https://www.statista.com/statistics/827293/world-beacons-technology-market-revenue-by-end-user/
Statista reports that the global beacon market was valued at $519.6 million in 2016 and is expected to reach $56.6 billion by 2026, growing at a CAGR of 59.8% between 2016 and 2026.
The Technology behind Beacons
Beacons perform a single task: they send out a radio signal at predefined intervals. BLE-enabled devices such as smartphones receive these signals and act accordingly. Each beacon is assigned a unique identifier, and by transmitting this identifier the beacon lets the receiver work out the location of the beacon, and therefore of the user.
A beacon contains a small ARM (Advanced RISC Machines) processor, a Bluetooth connectivity module and a small battery. The CPU runs firmware, written in a low-level programming language, that controls the behaviour of the beacon. Because a beacon's only job is to transmit its identifier, this modest CPU power and battery are more than sufficient to process the data and encrypt the identifier. Inside the beacon is a small antenna that transmits electromagnetic waves using the Bluetooth protocol. The latest Bluetooth standard is 4.2; beacon radio waves operate around 2.4 GHz and the maximum payload for the 4.2 standard is 257 bytes. Within such a small payload, beacons transmit a UUID, a major value, a minor value and the calibrated signal power, and receiving devices estimate the proximity of the beacon from that signal power and the measured signal strength.
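As a rough sketch of that proximity calculation, assuming the common log-distance path-loss model (the numbers below are illustrative, and real receivers smooth many readings):

# distance approx. 10 ^ ((measured_power - rssi) / (10 * n))
# measured_power: calibrated RSSI at 1 m advertised by the beacon
# rssi: signal strength seen by the receiver; n: path-loss exponent (about 2 in free space, higher indoors)
measured_power=-59
rssi=-75
n=2
awk -v mp="$measured_power" -v r="$rssi" -v n="$n" \
    'BEGIN { printf "approximate distance: %.1f m\n", 10 ^ ((mp - r) / (10 * n)) }'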
A beacon transmits the following components:
Universally Unique Identifier (UUID): It is the unique identifier that differentiates your beacons from other devices outside your network.
Major Value: an unsigned integer that tells you which group the beacon belongs to. For instance, beacons installed on the 1st floor could all share the same major value. The value can be anything between 1 and 65535.
Minor Value: an unsigned integer that differentiates an individual beacon within a group. This value also falls between 1 and 65535.
Here is an example of a UUID:
f626db66-3fa2-4e98-8013-bc5b71f0983c
When you think of a beacon, you probably picture a physical device. However, some smartphones can act as both transmitter and receiver. Apple, for instance, doesn't offer a physical beacon; it incorporated beacon technology into the iOS 7 operating system. With more than 200 million iOS 7 devices in the market, Apple already has a considerable number of potential beacons out there.
How are beacons useful?
Beacons don't relay any meaningful message on their own; they simply broadcast their IDs. It is the job of the receiving device to turn this information into a useful solution. For instance, a retail mall installs beacons throughout the building. When a customer visits the store and browses the electronics section, the beacon installed there transmits its ID. The app on the customer's smartphone receives the ID and identifies the customer's location; in this case, it knows the customer is in the electronics section, so it sends discounts and offers related to the mall's electronic products. Moreover, these offers can be specific and customized for that customer.
Asset tracking with Beacons
Asset tracking and management is a key requirement in any industry, and beacons can significantly reduce the cost and complexity of the job. There are multiple ways to track assets using beacons. For instance, you can mount BLE receivers in permanent fixtures and tag assets with beacons. When an asset comes into proximity of a BLE-enabled receiver, the receiver records the movement, relays it via mobile data or Wi-Fi and logs the data. You can act on the information immediately or store it for management and analytics purposes. Using beacons, you can cost-effectively track thousands of assets in real time, 24/7.
There are situations where you cannot mount BLE receivers in permanent fixtures, such as temporary locations like conference or function halls. In such cases, you can place beacons in different spots and track assets using a mobile app. By tagging assets with beacons, you can track each asset from the app, and implementation is easy since there is no need for wiring or costly installation.
For more accuracy and maximum coverage, you can augment the beacon setup with additional receivers: install fixed beacons and fixed BLE receivers and complement them with moving beacons and moving BLE receivers. This setup can be extremely useful in low-signal areas such as hospital ICUs or high-security airport checkpoints. In areas such as large construction sites, the environment changes quickly, so you need a dynamic beacon architecture; in such cases you can complement beacons with GPS and Wi-Fi. Depending on your environment, business type and requirements, you can choose the right beacon deployment.
Asset tracking with beacons is quickly gaining traction, and reports show that beacons have significantly reduced the operational costs of asset management. According to a Proximity Directory report, a total of 15,176,500 proximity sensors were installed globally in Q2 2017, and asset tracking with beacons is saving billions of dollars for the $9.1 billion logistics industry. Similarly, the health industry can save hundreds of thousands of dollars, with an ROI of 275%, by using beacon-based asset tracking.
The advantages of beacons are enormous. Using beacons, you can track every item in a warehouse, track vehicles within a facility, track equipment and machinery in a healthcare location, and track luggage trolleys in airports, railway stations and so on. In addition, you can track people and employees by giving them BLE-enabled devices.
Bluetooth 5.0 offers additional capabilities in the form of 2x speed, 800% more broadcast messaging capacity and 4x range. As such, beacons are sure to disrupt RTLS solutions in the coming days.
IoT with Private LTE Networks is disrupting the Industrial Vertical – Is your business ready for this innovation?
In February 2017, Qualcomm, along with GE and Nokia, announced a successful demonstration of private LTE for the industrial IoT market, which created a great buzz in industrial IT circles. The team conducted live field trials throughout 2017 to continue the research, integrating their platforms and technologies. Nokia provided the base infrastructure and the Nokia Digital Automation Cloud service for a private network, Qualcomm supplied the wireless connectivity with its chipset devices, and GE integrated these solutions into its industrial IoT platform, Predix, for better control of industrial resources across the infrastructure. GE runs the Predix platform over a private LTE network set up at its headquarters. This demonstration brought private LTE networks into the limelight.
An Overview of LTE Networks
Long Term Evolution (LTE) is a high-speed wireless network technology that offers low latency and high throughput over long distances for data terminals and mobile devices, building on GSM/UMTS technologies. It was designed by the 3rd Generation Partnership Project (3GPP) as a 4G communications standard to offer higher speeds for wireless networks, roughly ten times faster than 3G. With IP-based voice, data and multimedia streams, LTE networks offer speeds between 10 Mbit/s and 1 Gbit/s and are easy to deploy and use. They can be deployed on different frequency bandwidth blocks: current LTE networks support 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz channels, and every 5 MHz of spectrum in a cell supports around 200 active connections.
How does it work?
LTE uses two different air interfaces for the downlink and the uplink. For the downlink, from the tower to the device, LTE uses the Orthogonal Frequency Division Multiple Access (OFDMA) air interface, which allows multiple connections on a single cell and increases the throughput of each connection. For the uplink, from the device to the tower, LTE uses Discrete Fourier Transform Spread OFDM (DFTS-OFDM, also known as SC-FDMA). By using different air interfaces for downlink and uplink, LTE networks optimize connections and deliver better battery life for devices.
LTE for Industrial IoT
LTE technology is not new, but it gained momentum with the advent of the Internet of Things (IoT). While cloud computing revolutionized the IT world and centralized resources, it also paved the way for BYOD networks in which mobile devices entered the enterprise network. The next big step was the Internet of Things, in which sensors and other industrial devices joined IT networks. With IoT, organizations can facilitate communication between a wide range of devices, both IT and non-IT.
The main challenge for industrial IoT is connectivity. Industrial companies often have temporary facilities in remote places where a reliable network connection is hard to obtain. Whether it is an oil excavation site, a mining site or a power plant, the regular network might provide basic connectivity but not the required speed and efficiency. As industrial infrastructure is now equipped with robotics, automated guided vehicles (AGVs), drones, and augmented and virtual reality (AR/VR), high-speed, low-latency data transfer becomes a critical requirement. Data should be stored locally while remaining globally accessible, and services should be interoperable. This is where LTE networks come to the rescue: a temporary industrial site can set up a standalone LTE network and enjoy reliable, fast connectivity for that facility.
The state of LTE Networks
Today, almost every cellular mobile device supports 4G LTE networks. AT&T and Verizon launched 4G LTE networks early, and others are quickly following. According to ReportsandReports, global investment in LTE networks accounted for $800 million in 2016, a figure expected to grow at a CAGR of 32% between 2016 and 2020. Persistence Market Research estimates that the global LTE market will reach $926.1 billion by 2024, growing at a CAGR of 13.1% between 2016 and 2024. Similarly, Harbor Research projects that the private LTE network market will reach $17 billion by 2022.
Public LTE Vs Private LTE
Enterprises can choose either public or private LTE networks. With a public LTE offering, the network is managed by the service provider; while it gives wide-area coverage, it is also shared with other customers. Public LTE networks provide general voice and data services.
Private LTE networks bring additional benefits to the table. Firstly, they come with dedicated equipment and are managed locally, giving better control over data; with no sudden traffic surges and consistent performance, industrial facilities can keep productivity at predictable levels. Secondly, private LTE networks are easy to deploy and use for both dedicated and virtualized infrastructure. Thirdly, they offer a dedicated service, which means high-quality service is guaranteed along with the ability to customize the network to suit industry-specific needs.
Benefits of Private LTE Networks
- Support multiple devices with higher bandwidth
- Customizable for features such as mobile settings and quality of service
- Highly secure
- Better control over data
- Future proof as you can leverage the LTE roadmap
- Superior coverage for indoors and outdoors
- Seamless mobility
- Interoperability between suppliers
Private LTE Networks take the Industrial IoT to a new horizon
As IoT networks grew, higher speed and coverage, full mobility, low latency and quality of service became critical requirements for industrial facilities such as airports, container ports and warehouses. Qualcomm partnered with GE and Nokia to create a unified communication network system, resulting in the first private LTE-based network for industrial IoT. By integrating all the devices in the network and bringing them onto the industrial Predix platform, organizations can now apply technologies such as machine learning, analytics, artificial intelligence and predictive maintenance to significantly improve the efficiency of business processes and services. Nokia takes the automation further by offering a private plug-and-play LTE ecosystem, while the wireless communication is managed by Qualcomm. MulteFire is an LTE deployment model initiated by Qualcomm and Nokia and is now managed by the MulteFire Alliance, formed in December 2015. Similarly, the Federal Communications Commission (FCC) in the US has defined the Citizens Broadband Radio Service (CBRS), which offers 150 MHz of shared spectrum in the 3.5 GHz band. Organizations therefore have two ways to cost-effectively own and manage private LTE networks without purchasing spectrum licenses.
As private LTE networks bring low latency and high-speed wireless connectivity, industries can add more sensors, analytics and automation devices to efficiently manage business-critical operations such as handheld computing, AGVs, head-mounted displays, industrial robots, edge computing and analytics, interactive maintenance, environmental sensors, drone inspection, and turbine and blade sensors.
Private LTE Use cases
Manufacturing Vertical
In manufacturing, welding robots are commonly used for welding and handling automobile parts on the assembly line. Synchronization between a robot and the assembly line sensors is critical for the precision of the process: a small delay in communication can result in collisions between robots, which in turn bring the entire assembly line to a halt. For this reason, many plants run hard-wired connections, which are expensive; wired connections also occupy more floor space and clog the area. With private LTE networks, organizations can operate wireless robots and thereby save time, money and footprint.
Mining Vertical
At a mining site, people have to work in harsh and remote environments. Although sophisticated, automated solutions are available, organizations often cannot use them owing to the lack of a proper communication system carrying real-time data between remote vehicles, systems and the monitoring center. With a private LTE network, organizations can replace regular vehicles with driverless ones that quickly reach remote areas and apply drill-and-blast procedures precisely, and the mining operations can be monitored efficiently from a centralized dashboard.
Healthcare Vertical
Healthcare is an important vertical that hosts a large number of IoT devices, ranging from scanners and X-ray machines to clinical and biometric devices. However, healthcare organizations have largely stuck with Wi-Fi because they cannot commit to a single mobile service provider's radio for all devices, and obtaining FDA certification for every clinical device is cumbersome. CBRS-based LTE networks now allow organizations to choose spectrum that is independent of a mobile service provider, and one FDA certification works for any private LTE spectrum.
Getting started with Internet of Things (IoT)
Introduction
The Internet of Things (IoT) has been the biggest buzz in IT circles in recent times. This disruptive technology brings digital, analog and mechanical devices under a single umbrella to simplify infrastructure management. IoT secures business processes, improves operational efficiency, optimizes costs and allows for better decision making.
According to MarketsandMarkets, the market value of IoT reached $157.57 billion in 2017. This value is expected to touch $561.04 billion by 2022, growing at a CAGR of 26.9%. Juniper envisages that the total number of devices connected to the IoT networks would reach 42 billion by 2021. According to a McKinsey report, the IoT market that was valued at $900 million in 2015 is expected to touch $3.7 billion in 2020, growing at a CAGR of 32.6%.
An overview of Internet of Things
The Internet of Things is a system or network of interconnected things, ranging from digital devices to mechanical devices and sensors, with the ability to transfer or exchange data without human or computer intervention. The transfer of data relies on unique identifiers. Simply put, IoT is a network of things, and these things can be anything from an animal with a biochip to a person with a monitoring implant or a machine with a built-in sensor that tracks its temperature.
In the traditional network setup, only computing devices were able to communicate with each other. With the advent of IoT, computers, sensors and other mechanical devices can join a network and exchange data among themselves. The Internet of Things is essentially a combination of microservices, wireless technologies and micro-electromechanical systems (MEMS). The term was first used by Kevin Ashton of the MIT Auto-ID Center in 1999. IoT has allowed people to analyze machine-generated data and gain better control over business processes.
How do IoT Systems work?
An IoT system comprises multiple devices or sensors, each assigned a unique identifier: for instance, a logistics company runs a fleet of vehicles with GPS units, a power plant uses a sensor to monitor the temperature of a boiler, or a biochip implanted in an animal reports its temperature. The data collected from a device or sensor is automatically uploaded to the cloud or to a central location using a variety of communication technologies such as LAN, WAN, Wi-Fi, cellular networks, Bluetooth Low Energy, NFC, RFID, Zigbee and Z-Wave.
The data stored in the cloud is processed using analytics software. Based on threshold values set by the user, the software monitors the data and automatically performs specified actions. For instance, if the boiler temperature reaches its threshold value, the software automatically shuts the boiler down to prevent an explosion and, at the same time, sends an alert notification to the people concerned. Using a centralized dashboard, you can remotely monitor and manage the IoT infrastructure with ease.
How do devices in an IoT network speak to each other?
When you send data from one device to another, the data is broken into smaller packets that travel through various routes to reach the destination, where the pieces are reassembled. The Internet Protocol underpins this messaging, and two transport protocols are commonly used on top of it: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). UDP is quick and efficient but doesn't verify receipt of the data, whereas TCP checks that the data has actually arrived.
When it comes to the Internet of Things, there are multiple devices that vary in hardware, software, connectivity, application and messaging protocols. So, communication methods differ with each IoT system.
Common Communication Protocols for IoT systems
IoT Device to IoT Device
When you want one IoT device to communicate with another, the Data Distribution Service (DDS) is a good option. It is a middleware protocol that uses the publish-subscribe model to send and receive data. DDS eliminates complex network programming in distributed applications and offers highly scalable, reliable data connectivity. With message-centric middleware, developers have to write code for sending messages; data-centric middleware, by contrast, is aware of the data it stores. DDS is data-centric, which makes it well suited to IoT networks and gives better control over data management. With a rich set of Quality of Service (QoS) policies, DDS lets you control how data is stored and distributed, which makes it a good choice for industrial as well as consumer IoT.
IoT Device to the Cloud
When you want to collect data from many IoT devices in a large network and send it to the cloud for analysis, you need a system that can efficiently ingest massive amounts of data. For instance, a large shopping mall contains multiple HVAC units, and each unit reports metrics such as temperature, humidity, load, energy consumption, air quality and target temperature, generating gigabytes of data within seconds. To process, store, analyze and manage these massive datasets, you need a powerful messaging system. Apache Kafka is one answer.
Apache Kafka
Sensor devices cannot run the full TCP/IP stack, so they connect through an IoT gateway that speaks protocols such as Zigbee or Z-Wave, which aggregates the readings and routes them onward. Apache Kafka clusters receive these datasets and forward them to the appropriate consumers: metrics such as temperature and humidity need real-time monitoring, so they are sent to stream-processing systems such as Apache Spark or Apache Storm, while metrics such as energy consumption and load are collected over time and analyzed using Hadoop-based analytics. Apache Kafka acts as a publish-subscribe messaging system into which massive amounts of data are ingested.
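As a minimal sketch of such a pipeline using the console tools that ship in Kafka's bin directory (the broker address, topic name and sample payload are assumptions):

# Create a topic for the HVAC telemetry (sizing and broker address are placeholders).
kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --topic hvac-metrics --partitions 3 --replication-factor 1
# A gateway would publish readings to the topic; here we simulate a single reading.
echo '{"device":"hvac-17","temperature":22.4,"humidity":41}' | \
    kafka-console-producer.sh --bootstrap-server localhost:9092 --topic hvac-metrics
# Downstream processors (Spark, Storm, Hadoop jobs) consume from the same topic.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic hvac-metrics --from-beginning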
MQTT
When you have multiple client apps exchanging real-time messages on various topics, Message Queuing Telemetry Transport (MQTT) is a good choice. Invented by Andy Stanford-Clark of IBM, MQTT is a lightweight protocol that uses the publish-subscribe messaging model. Working on top of TCP/IP (it uses port 1883 by default), MQTT supports data communication even over low-bandwidth networks. Being lightweight, it allows devices with a small code footprint to send data to the cloud, and it is particularly well suited to mobile and battery-powered devices where low power usage, a small footprint and minimal data packets matter.
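For illustration, the Mosquitto command-line clients show the publish-subscribe pattern; the broker host and topic layout below are placeholders:

# Subscribe to temperature readings from all boilers (topic hierarchy is illustrative).
mosquitto_sub -h broker.example.com -p 1883 -t 'plant/+/temperature' -v
# A constrained device publishes a reading to its own topic with QoS 1.
mosquitto_pub -h broker.example.com -p 1883 \
    -t 'plant/boiler-01/temperature' -m '87.5' -q 1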
CoAP
When you have resource-constrained devices such as wireless sensor nodes in the IoT network, the Constrained Application Protocol (CoAP) is a good choice. This application-layer protocol enables constrained nodes to communicate easily with regular nodes on the Internet and integrates well with the web because it translates readily to HTTP. CoAP can also be carried over other transports, such as SMS on mobile networks.
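A small sketch, assuming the libcoap command-line client and an illustrative device address, shows how a constrained node can be queried much like a web resource:

# Read a sensor value from a constrained node over CoAP (UDP port 5683 by default).
coap-client -m get coap://192.168.1.77/sensors/temperature
# Update a resource on the node, mirroring an HTTP PUT.
coap-client -m put -e "open" coap://192.168.1.77/actuators/valve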
RESTful API
As many different devices are connected to an IoT network, developers should be able to work with specific devices without knowing their protocols or internals. REST APIs let you expose devices as interfaces so that you can write code for them just as you would for back-end systems. RESTful APIs suit IoT systems because they allow any device that can speak HTTP to expose its state and standardize the way its data is managed; a developer can, for example, write code that reads the state of a light bulb and turns it on or off. REST's uniform interface also means clients and devices written in different programming languages can interoperate over the same API, so developers can focus on the interactions between applications rather than on the applications themselves.
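A hedged example: assuming a hypothetical gateway that exposes a bulb at /api/v1/bulbs/kitchen, the interaction could look like this:

# Read the current state of the bulb (endpoint and payload are hypothetical).
curl -s http://gateway.local/api/v1/bulbs/kitchen
# Turn the bulb on and set its brightness with a standard HTTP PUT.
curl -s -X PUT http://gateway.local/api/v1/bulbs/kitchen \
     -H 'Content-Type: application/json' \
     -d '{"power":"on","brightness":80}'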
A common IoT platform is the need of the hour
Because so many different devices are involved in IoT networks, it is not easy to come up with a common protocol for device communication. Two groups have been trying to create a common communication platform for IoT devices: the AllSeen Alliance, backed by companies such as Qualcomm and Cisco, and the Open Interconnect Consortium (OIC), backed by companies such as Intel and Samsung Electronics. The AllSeen Alliance released a software framework called AllJoyn, and the OIC followed with another framework called IoTivity. Given these efforts to standardize IoT communication, we can expect far better interoperability within the IoT in the days to come.
Run on AWS – How to better control cloud costs while improving operational efficiencies?
AWS offers an innovative cloud platform that enables organizations to quickly build great apps at reduced cost. However, organizations that move their infrastructure to AWS should ensure the architecture is designed to take full advantage of the agility and elasticity AWS offers; failing to do so negates the benefits of the cloud. Moreover, failing to choose the right tools for the right processes across the organization can drive up operational costs very quickly. Here are some things to consider while building applications in the AWS cloud.
Optimized resource usage
In a traditional environment, businesses operate with a fixed infrastructure. Because the infrastructure is installed up front, expensive hardware may sit idle at times. AWS offers a pay-as-you-go model in which you provision resources to match changing business needs: in addition to compute, you can provision storage, databases and other higher-level application components. Scaling can be done in two ways, vertical and horizontal. With vertical scaling, you upgrade to a larger instance configuration so the system can handle extra load; with horizontal scaling, you add more instances to the fleet, much as you would add more drives to a storage array.
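As a hedged sketch of horizontal scaling on AWS (the launch template name, subnets and capacity figures are placeholders), an Auto Scaling group lets you add or remove instances on demand:

# Create an Auto Scaling group from an existing launch template.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-template LaunchTemplateName=web-template,Version='$Latest' \
    --min-size 2 --max-size 10 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
# Scale out manually, or attach scaling policies that react to CloudWatch metrics.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name web-asg --desired-capacity 5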
In addition to scaling, you should distinguish between stateless and stateful components. Stateless applications don't store session information, meaning the application returns the same result for any user at any time, so you can add resources behind them easily. Stateful components, on the other hand, store session information; databases are the classic example. As a real-world illustration, e-commerce sites store user information so they can offer personalized prices, and most apps require a sign-in so that a personalized dashboard can be presented to that user.
Choosing the right services
AWS offers more than 90 services, so choosing the right service for the right task is key. For compute, you can choose Amazon Elastic Compute Cloud (Amazon EC2), and an Amazon Machine Image (AMI) lets you recreate identically configured instances at any time. For block storage, you have Amazon Elastic Block Store (EBS), whose snapshots are stored in Amazon S3. Amazon RDS lets you store and manage relational data, while AWS CloudFormation lets you define and provision your infrastructure as code. Amazon VPC is the virtual private cloud that allows you to securely extend your private network into the cloud. Organizations that process large volumes of data should look at distributed messaging and streaming services such as Amazon SQS and Amazon Kinesis; Apache Kafka is another option for processing streaming data. To reduce latency for global users, you can use the Amazon CloudFront content delivery network.
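For instance, launching a compute instance from the AWS CLI might look like the sketch below; the AMI ID, key pair and security group are placeholders for your own values:

# Launch a single small EC2 instance and tag it with a name.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web-01}]'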
Docker is a popular technology that allows developers to build and deploy applications inside software containers. Amazon offers AWS Elastic Beanstalk and Amazon ECS, which let you build and deploy multiple Docker containers across multiple EC2 instances, and with Amazon CloudWatch you can monitor and manage AWS resources from a centralized dashboard.
AWS Trusted Advisor provides a set of core checks free of cost. These checks help you monitor the performance, reliability and security of your environment and optimize your resource usage on AWS.
Securing your AWS infrastructure
Your AWS account is the key that opens up a whole new world of cloud resources, so using the root account credentials for day-to-day activities is not a good idea. Instead, create one or more IAM users who interact with AWS for daily work. Secondly, privilege-based access to your AWS environment is recommended: distribute services among different groups and allow access to sensitive processes only from a defined range of IP addresses, so that outside traffic is denied access to those processes.
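A minimal sketch of that practice with the AWS CLI (the user name and managed policy are examples only, not a recommendation for your account):

# Create a day-to-day IAM user instead of working as root.
aws iam create-user --user-name ops-admin
# Grant a managed policy; prefer least privilege in practice.
aws iam attach-user-policy \
    --user-name ops-admin \
    --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
# Issue programmatic credentials for CLI/API access.
aws iam create-access-key --user-name ops-admin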
Having a proper backup and recovery plan is essential. Back up instances using EBS snapshots or a third-party backup tool, and ensure your recovery plan provides business continuity. Critical application components can be deployed across multiple Availability Zones so that they are replicated accordingly.
At the outset, AWS looks easy and pretty straightforward to use. However, without proper knowledge of the platform, you can quickly run into huge expenses. This is where CloudTern comes to the rescue. CloudTern AWS Managed Services provides customized solutions tailor-made for your organization. With CloudTern Managed Services, you can better control cloud costs and improve operational efficiency while securely running your applications. Most importantly, you can concentrate on your core business processes while we take care of your cloud.
Create AWS AMI with custom SSH username and password
Introduction
An EC2 instance launched from the Amazon Linux AMI comes up with the default ec2-user account, and you can only SSH into that instance with a private key.
Need
We wanted an Amazon Linux AMI (base image) with a default username (similar to ec2-user) that also allows SSH login with a password.
SSH login with a password is also a requirement for authenticating user logins against an OpenLDAP server. That way our IT operations staff need not remember new login information; they can use their existing logins.
Solution
- Launch an Amazon Linux AMI micro instance.
- Connect to the instance with the private key.
- Create an SSH user and give it sudo permission (similar to ec2-user):
- sudo useradd -s /bin/bash -m -d /home/<ssh-user-home-directory> -g root <ssh-user-name>
- sudo passwd <ssh-user-name>
- Enable password login for SSH (add the following snippet at the end of the /etc/rc.local file):
if grep -Fxq "PasswordAuthentication no" /etc/ssh/sshd_config
then
  # This instance is launching for the first time; enable SSH login with a password
  sed -i 's/^PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
  /etc/init.d/sshd restart
fi
This piece of code changes PasswordAuthentication to yes in the sshd_config file at boot. (If we don't do this, every instance created from the AMI comes up with a rewritten sshd_config that disables password login.)
- Stop the instance.
- Select the instance and create an image (AMI).
- Launch a new instance from the AMI created above.
- After the instance is launched, you can log in with the SSH username you created in step 3.
Path to the AWS Cloud
Introduction: Path to the AWS Cloud
You've heard of Software as a Service (SaaS), Infrastructure as a Service and Platform as a Service; there is even XaaS to describe Anything as a Service. Now you can provide all of your company's functions "as a Service" – Your Company as a Service (YCaaS). You will be more scalable, more available, and more connected to employees, customers and suppliers. Just hop on this cloud…
This blog is written to simplify your trip to the cloud. It is a general-purpose guide, and specific details will vary with your needs. It covers migration to the AWS cloud platform, for which you will need an AWS account. The result will be a very flexible and highly available platform that hosts services for internal or external use. Services may be turned up or discontinued, temporarily or permanently, very easily, and may be scaled up or down automatically to meet demand. Because AWS services are billed as a service, computing becomes an operational expense rather than a capital expense (CAPEX).
The Framework
Exact needs will vary based on the services being migrated to the AWS cloud. A structured, reliable framework will transform your organization's approach to planning and offering online services. The AWS Cloud Adoption Framework (CAF) offers a structure for developing efficient and effective cloud migration plans, and with the guidance and best practices available within that framework you can build a comprehensive approach to cloud computing across your organization.
Planning
Using the framework (AWS CAF) to break complicated plans down into simple areas of focus will speed the migration and improve its chances of success. People, process and technology are represented at the top level. The focus areas include:
- Value (ROI)
- People (Roles)
- Priority and Control
- Applications and Infrastructure
- Risk and Compliance
- Operations
Value, or return on investment, measures the monetary impact on your business. For customer-facing services, this could mean reaching more customers faster and driving more engagement and meaningful transactions; for internal services, ease of access and pertinence of content add value.
People occupy many roles. Organizationally, internal stakeholders will need to be involved in decision making and in ongoing support. Business application stakeholders own outcomes both in the planning stages and in long-term utilization, content providers have initial and ongoing responsibilities, and end users depend on the platform and on the other stakeholders.
Priority and control of a service are defined by the resources dedicated to its migration and the disruption that can be tolerated. Priorities are also affected by readiness: new services are often easier to migrate because their platforms are already compatible, so they can move quickly ahead of more cumbersome services, while mission-critical services require the resources and special attention that go with critical status.
Risk and compliance are defined by the category of the service's usage. Commerce with external entities demands PCI compliance, personal health information of internal entities demands HIPAA compliance, and CRM and general information need copyright identification.
Operations are involved in the migration phase because the migration process affects business operations. Migration is not a day-to-day business process, so it requires its own resources, planning and priorities, and these priorities determine the resources available for the migration. A fast migration may require more resources: people, bandwidth and communications. A lower priority allows for fewer resources and, typically, less disruption.
Migration process
Migration is a process that rides on top of the normal business process, and all of these considerations affect planning. Once priorities are decided, identify the people and roles that will be involved in the migration and communicate the specific outcomes the team will be responsible for; be specific, and gain agreement and ownership. Deliver the resources the team identifies as necessary to meet its goals, including time: if the team has to step away from normal day-to-day responsibilities, those business processes must be temporarily re-routed, which involves support teams one level removed from the migration.
Outsourced teams can provide temporary resources in highly specialized roles to reduce the impact on business operations. Do the initial planning to determine your needs, choose an outsourced team based on experience in the specific roles you need to fill, and integrate the imported resources with the appropriate internal team members. Give ownership to the internal team and empower them to act when needs arise.
Construct the entire migration model before beginning the process. Build the budget and prepare for the impact of resource dedication up front, and measure progress against the model on a weekly basis. Communicate to the team that adjustments will be needed and that communication is how those adjustments get dealt with. Remember the butterfly effect: every change results in cascading consequences. With reliable communication, everyone will be more comfortable with the temporary effects of this over-the-top process.
Once the team and their roles are communicated, the non-human resources can be quantified. How much bandwidth will be required to meet the identified goals? Is the network capable of delivering that bandwidth, or will the infrastructure need to be upgraded? Consider the impact the migration may have on infrastructure and on critical business services, and be prepared for contingencies and unexpected demands.
If network augmentation is required, how deep into your infrastructure will you need to make changes? As data migration paths are identified and bandwidth is dedicated, will other segments of the network be affected? Network augmentations also have power and space impacts, and downstream there will be additional people affected as configurations and replacement equipment are implemented.
Peak demand capacity is often a separate planning concern. Peak busy hours will result in oversubscription of the available bandwidth, and with oversubscription comes service impact. The impact is easily underestimated because saturation lengthens its duration. Alongside capacity planning there needs to be service-level consideration: what tolerance to latency will the user base have?
Availability planning during migration determines the impact in the event of a disaster. Business continuity plans may need to be modified during the migration period, because existing failover functions will not include the migration paths. If this is not addressed in advance, an unplanned outage will disrupt your migration and likely have a negative business impact. Whatever availability targets are associated with the services being migrated will need planning for the migration period.
The cost of maintaining duplicate services during migration includes licensing: when two systems run simultaneously, the license expense is doubled. Depending on demand, and with planning, some efficiencies may keep this cost below the maximum, and the migration may also be an opportunity to eliminate some marginally needed or legacy expenses.
In the long run, you will reap the rewards. Savings include server maintenance, break-fix and upgrades, local and off-site backups, environmental conditioning, power, and the staff time involved in maintenance, break-fix, upgrades and paying the bills for these services. Importantly, scalability in the AWS cloud does not require as much advance planning, over-capacity implementation or over-provisioning for future expansion, and capacity can be reduced on the fly as well.
The total return on investment will include a cost increase during planning and migration and long-term savings from increased efficiency and cost reductions. Over time, the total cost of ownership shrinks because it no longer carries the associated direct and indirect costs of owning hardware. There is also an intangible return in technology upgrades: the obsolescence of capital investments will greatly decrease, and new technology will evolve and be implemented invisibly in the cloud platform, ready for immediate use.
Contributors
William