Author: CloudTern

Telecom

Does 5G Technology Cause Corona Virus or other health…

Corona Virus has become the latest buzz everywhere in recent times. Be it news, WhatsApp messages, Facebook posts or phone calls, all you hear is mostly about this pandemic. However, not every corona-related post is true. Recently, people burnt 5G cell towers in the UK amid fears that 5G radio waves cause the corona virus. According to BBC News, three 5G tower masts were burnt down in Melling, Liverpool and Birmingham in the first week of April 2020. As these fake posts are forwarded virally, it is important to know whether 5G technology causes corona or any other health issues.

An Overview of 5G Technology

5G technology is the next-generation wireless standard that is redefining the way devices connect and communicate over the network. This fifth-generation technology offers high-speed internet connectivity of up to 10-20 Gigabits per second with a low latency of 1 millisecond. This is possible because 5G operates across multiple bandwidths to leverage and expand spectrum resources: it uses both the 5G New Radio (NR) sub-6 GHz bands and the 5G NR mmWave bands (24 GHz – 100 GHz) of the electromagnetic spectrum.

Orthogonal Frequency-Division Multiplexing (OFDM) is the modulation method used by 5G technology; it modulates digital signals across multiple channels to reduce interference. As such, 5G offers higher internet speeds and accommodates 100x more traffic with greater network efficiency, along with 10x lower latency, when compared with 4G. It means you can watch high-definition streaming videos, use AR/VR apps and play multi-player HD cloud games in real time.

The availability of multiple sensors and low latency means that you can efficiently monitor operations at a remote industrial plant, assist a surgeon in an operation, control logistics, and design and manage smart cities, autonomous cars and more. The list is endless. 5G also allows network slicing, wherein you can create subnets and categorize data based on your priorities. While earlier-generation wireless standards such as 2G, 3G and 4G operated in the 2.4 GHz – 2.6 GHz spectrum, the new 5G technology uses both the 2.4 GHz and the 5 GHz spectrum. This broader range of spectrum gives the scalability to accommodate a variety of devices operating at high speeds.

5G and Corona Virus

As 5G operates in a higher spectrum than earlier wireless standards, there have been several theories about its health implications for humans. The corona virus is officially named Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), and the disease it causes is called COVID-19. As the name says, it is a biological virus that spreads through droplets transmitted when infected people sneeze or cough near others. Once the virus comes into contact with a human, it quickly moves into the nasal passage and mucous membrane. The virus particles carry spike proteins on their surfaces that hook onto human cell membranes. Once the genetic material of the virus enters a cell, it hijacks the cell's machinery to reproduce the virus exponentially.

On the other hand, 5G technology uses radio waves that have less energy than visible light. While 5G is a physical phenomenon, the corona virus is a biological one: they are two different worlds altogether. As radio waves cannot carry the virus, 5G cannot spread it. The International Commission on Non-Ionizing Radiation Protection (ICNIRP), the scientific body that sets guidelines for EMF radiation, has officially stated that 5G technology does not cause the corona virus.

The Effect of 5G Radiation on Health

The term ‘radiation’ is always frightening, as it reminds everyone of nuclear radiation, ultraviolet radiation and gamma radiation. However, 5G radiation is not harmful. The electromagnetic spectrum is actually divided into two portions:

  1. Non-ionizing radiation: In this type of radiation, the energy per quantum is not enough to ionize atoms or molecules. In the electromagnetic spectrum, roughly 700 MHz to 750 THz is considered the non-ionizing portion.
  2. Ionizing radiation: Here the waves carry enough energy to ionize an atom or a molecule. As the frequency is higher, the energy is higher too. Gamma rays, X-rays and UV rays fall into this category.

5G technology currently operates within the 2.4 GHz – 5 GHz range, and future 5G air waves are expected to operate between 26 GHz and 66 GHz. As such, the radiation has less energy than visible light, so it is not harmful to human health. The ICNIRP regulates the maximum EMF radiation levels and sets guidelines, and current 5G emissions fall well within them. While a few scientists claim that 5G radiation is harmful to human health, they have not been able to provide solid scientific evidence to prove their point.
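The "less energy than visible light" claim can be sanity-checked with the photon-energy formula. The 26 GHz figure below is taken from the expected 5G range above; the ionization threshold is a typical atomic value:

```latex
E = h\nu = (6.626 \times 10^{-34}\,\mathrm{J\,s}) \times (26 \times 10^{9}\,\mathrm{Hz})
  \approx 1.7 \times 10^{-23}\,\mathrm{J} \approx 1.1 \times 10^{-4}\,\mathrm{eV}
```

Ionizing an atom typically takes on the order of 10 eV (13.6 eV for hydrogen), roughly a hundred thousand times more energy than a 26 GHz photon carries, which is why 5G falls firmly in the non-ionizing portion of the spectrum.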

Work from Home

Six-Point Formula to Maintain Business Continuity in these Coronavirus…

As the world entered the year 2020, little did anyone expect what the New Year was bringing. Starting off in the city of Wuhan in China, the new corona virus hit the world, creating havoc across the globe. Before anyone could assess the intensity of this pandemic, nCOVID-19 had already claimed thousands of lives. According to Worldometer, total corona virus cases stand at 3,221,029, with 228,252 lives claimed, as of 30th April 2020. Owing to the lack of a vaccine for this dreadful virus, countries have imposed lockdowns to break the chain of infection.

The corona virus and the subsequent lockdown have brought a sudden change to the human lifestyle. Stuck at home, the only respite for people is their mobile phone. Whether you want to watch a movie, play a game, perform financial transactions or catch the latest news, the smartphone has become the first option. As such, telecom operators are experiencing a huge surge in network traffic.

To maintain business continuity, organizations have moved to a work-from-home environment. However, the work-from-home approach brings new headaches to organizations in the form of cyber threats. With minimal protection, smartphones become vulnerable to a variety of cyber threats.

At CloudTern, we are equally concerned about your network security and business continuity as we care about your IT solutions!

Here is a six-point formula to maintain business continuity in these pandemic times.

1) End-user Security

  1. Educate your Employees: While most businesses focus on securing IT networks, it is equally important to educate employees about the dangers of cyber threats and train them to access business networks securely while keeping their devices safe.
  2. Security Solutions: It is important to install the latest security solutions suite on the laptops, smartphones or tablets employees use to access business networks from home. If you already have one, it should always be up to date. It should include anti-virus, firewall and device encryption.
  3. Auto Update Apps: All the apps installed on the device should be up to date. You can turn on auto update so that it automatically installs the updates.
  4. Secure Wi-Fi Passwords: It is highly critical to secure your Wi-Fi connection. Create a strong password and frequently change it. Make sure it uses the WPA2 encryption.
  5. Use secured systems: Always log into your account from a secured device. Avoid working from a shared home device that your children use, and log out of your account when you are not working. Say a strict no to accessing business networks over public Wi-Fi.
  6. Be alert with Phishing scams: Don’t reply to any emails or SMS with your login credentials. At the same time, be careful while clicking links in emails. Make sure that the site is encrypted when sharing financial information.

2) Business Network Security

As end users access business networks from different locations and devices, it is important to secure the business end as well.

  1. Security solutions in place: A work from home network implies that a variety of computing devices will be accessing business networks. As such, you should take extra care to secure your networks. Installing the latest security suite with anti-virus, firewalls, encryption methods etc. is the key. In addition, ensure that IAM systems are in place.
  2. Virtual Private Networks: Using a secure virtual private network is highly recommended. A VPN service encrypts the data, transmits it securely through the VPN tunnel and decrypts it at the receiver’s end. It also masks the user’s IP address and location, so hackers will find it tough to identify user details or access the data. If your organization already uses a VPN service, check whether you have the capacity to accommodate all users. For small and medium businesses that don’t use a VPN, it is important to choose the right VPN service; avoiding the free ones is recommended.
  3. Multi-factor authentication: As end users log in from multiple devices, implementing a multi-factor authentication system is recommended. Multi-factor authentication adds an extra security layer on top of your login credentials: along with the username and password, you also need to provide an OTP or use an MFA device such as a virtual MFA app, a hardware MFA device, a U2F security key or a display card device.
  4. Data Backup and Recovery: During this lockdown period, chances are that your network might get into trouble. As such, make sure that data is backed up at regular intervals and that critical data is secured with extra care. Update disaster recovery programs as well.

3) Business Continuity – Moving down the Centralised Hierarchical Structure

While securing the networks is the first concern, it is equally important to keep your business processes up and running. According to S&P Global Ratings, the coronavirus impact is going to bring global growth down to 0.4% in 2020, with a rebound to 4.9% expected in 2021. Supply chain disruptions across the globe are significantly impacting both B2C and B2B organizations. As such, efficiently managing your workforce is the key. A centralized hierarchical system might have worked well in a structured environment, but a lockdown that enforces physical distance means you’ll have to trust your subordinates. So decentralize your workforce and assign temporary roles to everyone, with a central monitoring system, to maintain business continuity.

4) Keeping in Touch

The coronavirus lockdown has disrupted the business structure but not business communications. It is important to keep in touch with all your customers and all stakeholders of the organization. Try to send at least an email or an SMS once a week enquiring about their health and safety, and offer tips on how to better manage their time. For instance, a healthcare institution can offer safety measures to avoid corona and tips to boost immunity levels. A travel agency can offer a virtual tour of an exotic location or a top-10 list of movies to watch that week. This lockdown is a great time to build rapport and trust with your customers.

5) Create New Opportunities

“When the going gets tough, the tough get going” – John Thomas

During these pandemic times, most people are worried about their business. However, the tough not only keep their business up and running but also create new opportunities out of it. For instance, the healthcare industry is working hard to provide proper care to patients. At the same time, healthcare institutions can take advantage of healthcare bots to offer a self-service option wherein a digital care assistant provides the required medical information, support and guidance to patients. In addition to patient engagement, bots can route calls to the right healthcare professional for instant and better care.

OneRemission is a chatbot that interacts with cancer patients and helps them with their diet, exercises and post-cancer lifestyle.  Ada Health is another software bot that uses AI to assess user health based on the indicated symptoms. Organizations with low business rates can take this time to create software bots to better engage their customers and increase sales.

6) Post-Lockdown Planning

The constant rise in new corona cases across the globe makes it hard to predict when the business world will return to normalcy. Most businesses are hugely impacted by the pandemic, which is sure to affect running projects, the client base, finances, the workforce and more. Having now seen both worlds, businesses should put a post-lockdown plan in place, as business will not be the same for the coming 1-2 years.

Organizations that quickly realign their strategies to meet changing business environments are sure to stay in the competition in the coming days!

Events

Heading to MWC 2018

The most awaited event in the Telecom/Mobile industry is just around the corner!

Where: Fira Gran Via, Barcelona, Spain

When: 26 Feb – 1 Mar, 2018

We at CloudTern are happy to announce our participation at the Mobile World Congress this year in Barcelona. Our CEO will be there to discuss “Churn Prediction using Machine Learning”, “IoT Solution – Device Control and Monitoring” and “Batch and Real-Time Data Streaming/Processing with Apache Kafka and Apache Storm”.

For better communication, start booking a meeting with us. Please send an email to ramu@cloudtern.com

AWS

Laravel project setup in AWS

Below are the steps to set up a Laravel project on an AWS instance.

  • Login to the AWS instance.
  • sudo yum update
  • sudo yum install httpd24 php56 php56-pdo php56-mbstring php56-mcrypt php56-mysqlnd
  • sudo curl -sS https://getcomposer.org/installer | php
  • sudo mv composer.phar /usr/local/bin/composer
  • sudo yum install git
  • cd /var/www/html
  • sudo git clone https://username@example.com/path/to/repository.git
  • Rename the cloned repository/project directory if required.
  • cd project-name
  • sudo vi .env
  • Change the MySQL connection details.
  • composer install (installs the project dependencies; required before artisan will run)
  • php artisan config:cache
  • cd /etc/httpd/conf
  • sudo vi httpd.conf
  • Insert the following configuration

          <VirtualHost *:80>
               ServerName www.example.com
               DocumentRoot /var/www/html/project-name/public
               <Directory /var/www/html/project-name/>
                    AllowOverride All
               </Directory>
          </VirtualHost>

  • sudo service httpd start
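The VirtualHost block above has to be edited by hand for every project. As a small convenience, the snippet below generates it from a project name; this is a sketch, not part of the original steps, and the project name (myapp) and output path are placeholder values:

```shell
#!/bin/bash
# Sketch: generate the Apache VirtualHost block for a Laravel project.
# PROJECT and OUT are placeholder values -- adjust for your deployment,
# then append the output to /etc/httpd/conf/httpd.conf.
PROJECT=myapp
OUT=/tmp/${PROJECT}-vhost.conf

cat > "$OUT" <<EOF
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/html/${PROJECT}/public
    <Directory /var/www/html/${PROJECT}/>
        AllowOverride All
    </Directory>
</VirtualHost>
EOF

echo "Wrote $OUT"
```

After appending the generated block to httpd.conf, restart Apache with sudo service httpd restart for the change to take effect.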
AWS

How to import VM image to AWS

One of the coolest features I like about AWS is that it not only gives you powerful images through AMIs but also allows you to import VM images running in your own data center. In this post, I would like to show you how simple it is to import a VM image into AWS.

The prerequisites for VM import are an S3 bucket to hold the image, an IAM role named vmimport and the AWS CLI.

For the S3 bucket I am assuming the name "my-vm-imports".

Creating IAM Role

You cannot create this role using the AWS Management Console; you have to use the AWS CLI.

  1. Create a trust policy file named trust-policy.json

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": {
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}

2. Using the aws command line, create a role named vmimport

aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

3. Create a file named role-policy.json with the following policy

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource": [
            "arn:aws:s3:::my-vm-imports"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetObject"
         ],
         "Resource": [
            "arn:aws:s3:::my-vm-imports/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}


4. Attach the policy to the role created above using the put-role-policy command.

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
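A malformed policy file makes the create-role or put-role-policy call fail with a parsing error, so it can help to validate the JSON locally first. A minimal sketch (the /tmp path and inlined heredoc are just for illustration; point the check at your real trust-policy.json and role-policy.json):

```shell
#!/bin/bash
# Sketch: validate a policy file locally before handing it to IAM.
cat > /tmp/trust-policy.json <<'EOF'
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": { "sts:Externalid": "vmimport" }
         }
      }
   ]
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/trust-policy.json > /dev/null && echo "trust-policy.json: OK"
```

The same check works for role-policy.json and the container file used later; jq would serve equally well if it is installed.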

Next steps :

  1. Upload the VM image to S3
aws s3 cp file_path s3://my-vm-imports

2. Create a container file that contains the S3 bucket name, image format, description and key name in the S3 bucket. Save this file as container.json

[
  {
    "Description": "My VM",
    "Format": "ova",
    "UserBucket": {
        "S3Bucket": "my-vm-imports",
        "S3Key": "myVm.ova"
    }
  }
]



Note: Only OVA, VMDK, VHD/VHDX and RAW image formats are supported by AWS VM Import.

3. Finally, import the image from S3 with the import-image command. After that, your image (AMI) will be ready for use.

aws ec2 import-image --description "Linux or Windows VM" --disk-containers file://container.json

Thanks for Reading.

Best Regards

CloudTern

AWS

Hybrid Cloud Architecture with CISCO CSR 1000v

The Cisco CSR 1000v series is a software router appliance from Cisco. It provides enterprise routing, VPN, firewall, IP SLA and more. The CSR 1000v can be used to connect multiple VPCs across all regions in the AWS Cloud as well as on-premise networks, so it can be used to avoid the AWS managed VPN service.

In AWS, you can find the Cisco CSR 1000v in the AWS Marketplace, with a 30-day free trial to test it out. Be aware that this is not cheap: you will still pay the EC2 instance charges. Not all instance types are supported for the CSR 1000v; only the m3 and c3 instance families are.

The Cisco CSR 1000v can be used in various cloud network models, such as Transit VPC and multi-cloud networks.

Following is the Architecture I have used to connect multiple VPC.

 

The two VPCs are in the N.Virginia and Ohio regions. Each VPC has an Internet Gateway, and they are connected over VPN. In the Ohio region, we used the AWS managed VPN service to connect to the VPC in the N.Virginia region, and as the on-premise edge router we used a Cisco RV110W small business router. In this post, I would like to list the steps to establish a VPN between two VPCs in two different AWS regions.

Steps to create VPC’s in two regions:

  1. Create a VPC in the N.Virginia region with CIDR 10.0.0.0/16 and attach an Internet Gateway to it. You can do this from the CLI or through the management console.
    aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-1
    
    Output:
    {
                "Vpc": {
                    "VpcId": "vpc-848344fd",
                    "InstanceTenancy": "dedicated",
                    "Tags": [],
                    "CidrBlockAssociations": [
                        {
                            "AssociationId": "vpc-cidr-assoc-8c4fb8e7",
                            "CidrBlock": "10.0.0.0/16",
                            "CidrBlockState": {
                                "State": "associated"
                            }
                        }
                    ],
                    "Ipv6CidrBlockAssociationSet": [],
                    "State": "pending",
                    "DhcpOptionsId": "dopt-38f7a057",
                    "CidrBlock": "10.0.0.0/16",
                    "IsDefault": false
                }
              }
    
    
    aws ec2 create-internet-gateway --region us-east-1
    
    Output:
    
              {
                  "InternetGateway": {
                      "Tags": [],
                      "InternetGatewayId": "igw-c0a643a9",
                      "Attachments": []
                  }
              }
    
    
    
    aws ec2 attach-internet-gateway --gateway-id <<IGW-ID>> --vpc-id <<VPC-ID>> --region us-east-1
  2. Create two subnets in N.Virginia Region VPC, one for CSR 1000v with CIDR 10.0.0.0/24 and another subnet with CIDR 10.0.1.0/24.
    aws ec2 create-subnet --cidr-block 10.0.0.0/24 --vpc-id <<VPC-ID>> --region us-east-1
    
    Output:
    {
                  "Subnet": {
                    "VpcId": "vpc-a01106c2",
                    "AvailableIpAddressCount": 251,
                    "MapPublicIpOnLaunch": false,
                    "DefaultForAz": false,
                    "Ipv6CidrBlockAssociationSet": [],
                    "State": "pending",
                    "AvailabilityZone": "us-east-1a",
                    "SubnetId": "subnet-2c2de375",
                    "CidrBlock": "10.0.0.0/24",
                    "AssignIpv6AddressOnCreation": false
                  }
              }
    
    aws ec2 create-subnet --cidr-block 10.0.1.0/24 --vpc-id <<VPC-ID>> --region us-east-1
    
    Output:
    {
                  "Subnet": {
                    "VpcId": "vpc-a01106c2",
                    "AvailableIpAddressCount": 251,
                    "MapPublicIpOnLaunch": false,
                    "DefaultForAz": false,
                    "Ipv6CidrBlockAssociationSet": [],
                    "State": "pending",
                    "AvailabilityZone": "us-east-1b",
                    "SubnetId": "subnet-2c2de375",
                    "CidrBlock": "10.0.1.0/24",
                    "AssignIpv6AddressOnCreation": false
                  }
              }
  3. Create a Route Table in the N.Virginia VPC with a default route to the Internet Gateway, and associate the CSR subnet with it.

4. Launch the CSR 1000v from the AWS Marketplace with the one-click launch. You can ssh into the CSR 1000v instance as ec2-user. Attach an Elastic IP to the CSR instance; this will act as the Customer Gateway of the N.Virginia region VPC. In later steps, we will configure the router to add static routes to the other subnets in the VPC and set up BGP to propagate routes over the VPN connection with the other VPC.

5. In a similar fashion, create a VPC in the AWS Ohio region with CIDR 10.1.0.0/16 and create two subnets with CIDRs 10.1.0.0/24 and 10.1.1.0/24.

Steps to Create VPN connection in AWS Ohio VPC

  1. Create the Customer Gateway. Open the VPC management console at console.aws.amazon.com. In the navigation pane choose Customer Gateway, then create a new Customer Gateway. Enter a name, the routing type as Dynamic, and the EIP of the CSR 1000v instance in the N.Virginia region VPC. The ASN number is 16-bit and must be in the range 64512 to 65534.
  2. Create a VPG and attach it to the VPC. In the navigation pane choose Virtual Private Gateway and create the VPG.
  3. Now create the VPN connection. In the navigation pane choose VPN Connection, then Create New VPN Connection. Enter the name, the VPG and the Customer Gateway created previously, select the routing type as Dynamic and create the VPN connection.

It will take a few minutes to create the VPN connection. When it is ready, download the configuration for the Cisco CSR from the drop-down menu.

Steps to establish VPN Connection on CSR 1000v

  1. Add static routes for the other subnets in the VPC (N.Virginia) to the CSR 1000v. Every subnet in AWS has a virtual router with the IP address of the subnet CIDR base + 1. As the CSR router is in subnet 10.0.0.0/24, the virtual router IP address is 10.0.0.1. The virtual router in each subnet has routes to all other subnets in the VPC.
    >Configure terminal
    #ip route 10.0.1.0 255.255.255.0 10.0.0.1
  2. Configure BGP. Choose the ASN number which you gave while creating Customer Gateway in Ohio VPC. Above we gave 64512
    > Configure terminal
    (config)#router bgp 64512
    (Config-router)# timers bgp keepalive holdtime
    (Config-router)# bgp log-neighbor-changes
    (Config-router)# end

    This step might not be necessary, but as good practice I applied the above configuration before applying the configuration file downloaded earlier.

  3. Apply the configuration downloaded when the VPN connection was created. After you have applied those settings on the CSR, the management console will show both VPN tunnels as UP.

Testing to check connectivity between two VPC’s

  1. Launch an instance in subnet1 of the Ohio region VPC with a public IPv4 address. SSH into the instance and ping the private IP of the CSR 1000v instance.
  2. Similarly, you can check connectivity in the other direction from the N.Virginia region VPC by pinging the private IP of the instance in subnet1 of the Ohio region VPC.

Troubleshooting :

> Route propagation must be enabled on the route table in the Ohio region VPC.

> You must configure the CSR 1000v as a NAT so that the subnets in the N.Virginia region can reach hosts in the Ohio region VPC via the CSR 1000v. You then need to update the route table with the CSR 1000v instance-id as the target.

> Allow ICMP in the security groups on all instances.

Thanks and Regards

Naveen

AWS Solution Architect @CloudTern

AWS

VPC Design Principles

Virtual Private Cloud (VPC) creation is the first step in building your infrastructure in the AWS Cloud. AWS gives you the flexibility to create a VPC based on RFC4632. The major components of a VPC are the VPC CIDR, subnets, route tables, ACLs and security groups. Creating the VPC itself is straightforward: just grab a CIDR based on RFC4632. Subnetting the VPC, however, should consider the following principles.

Creation of Subnets:

Primary reasons to create Subnets

  1. You need hosts to be routed successfully (private facing or public facing).
  2. You want to distribute workload across multiple AZs (Availability Zones) for fault tolerance.
  3. You want subnets for hosts that require an additional layer of security using an ACL (Access Control List).

Divide the network into smaller networks that can be considered public subnets, private subnets or VPN-only subnets. These networks are supernets, not the actual subnets we create. Then subnet each supernet into smaller networks into which you fit your hosts.

Note*: AWS reserves 5 IPs when you create a subnet, so the more subnets you create, the more IPs you lose. For example, for the subnet 10.0.0.0/27 the following IPs are reserved:

  1. 10.0.0.0 network address
  2. 10.0.0.1 Virtual Router address
  3. 10.0.0.2 DNS address
  4. 10.0.0.3 Reserved by AWS for future use.
  5. 10.0.0.31 Broadcast address
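The five reserved addresses are worth keeping in mind when sizing subnets. As a quick back-of-the-envelope sketch, usable = 2^(32 - prefix) - 5:

```shell
#!/bin/bash
# Usable addresses left in an AWS subnet after the 5 reserved IPs.
usable() {
  local prefix=$1
  local total=$(( 1 << (32 - prefix) ))   # addresses in the block
  echo $(( total - 5 ))
}

echo "/27 -> $(usable 27) usable addresses"   # 32 total, 27 usable
echo "/24 -> $(usable 24) usable addresses"   # 256 total, 251 usable
```

The 251 figure for a /24 matches the AvailableIpAddressCount that create-subnet reports for a freshly created /24 subnet.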

Route Tables

All hosts within a VPC can route to other hosts in the VPC using an implicit virtual router. A default virtual router is created when you create a subnet. For example, a subnet with CIDR 10.0.0.0/27 will have a virtual router with IP 10.0.0.1 (subnet CIDR base + 1). This router uses the entries of the route table the subnet is associated with.

Each subnet should be associated with a route table for traffic to flow. If a subnet is not associated with any route table, it uses the default Main route table. A route table can be associated with multiple subnets.

  1. Create route tables for subnets with different routing requirements (public facing or private facing).
  2. Create a route table for subnets that require more specific routing, for example a subnet that should allow traffic only from a pool of IP addresses.

Access Control List(ACL)

ACLs provide security at the subnet level: you control what traffic flows into and out of a subnet. ACLs are stateless, i.e. you have to define rules for both ingress and egress traffic.

You can find more at ACL Overview

Create an ACL if you want to restrict the traffic that can flow to the hosts in a subnet.

Network Address Translator (NAT)

A NAT is used to provide outbound internet access to hosts inside private subnets. The route tables of private subnets have to be updated with the logical ID of the NAT to give those hosts outbound internet connectivity.

Based on the above principles, a concrete example of creating a VPC in practice is below:

  1. Subnet the VPC CIDR into public-facing and private-facing subnets.
  2. Associate all private-facing subnets with a single route table and ACL. Apply the same to the VPN subnets and the public subnets, each with their own route tables and ACLs.
  3. Create a separate subnet if more security is needed at the subnet level using an ACL, and associate that subnet with a route table.

The following figure shows the summary of VPC Design in AWS

AWS

Cloud-Init Cheat Sheet

Cloud-init is a multi-distribution package that handles early initialization of cloud instances. Some of the things cloud-init can do are:

  • Set up the hostname.
  • Set up a local user.
  • Update and install packages on Linux.
  • Set up disks and mount additional volumes.

More information can be found at Cloud-Init

Today most distributions support cloud-init, and cloud-init runs on all major cloud providers (AWS, Azure, GCP). In this article, I want to show some of the code snippets I have tried in AWS with different distributions, mainly Ubuntu and Amazon Linux.

#To create hostname :

#cloud-config
#set the hostmachine name
fqdn: myhostname.com

This will set the hostname of the deployed instance to myhostname.com. However, the default Amazon Linux AMI does not support changing the hostname this way: you need to launch an instance, set preserve_hostname to false in /etc/cloud/cloud.cfg, build an image from that instance, and then launch a new instance from the built image with the above cloud-config script.
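On images that honor these keys in user-data, the whole change can be expressed in a single cloud-config; this is a sketch of that combination, not a snippet from the Amazon Linux defaults:

```yaml
#cloud-config
# Assumes the image honors user-data overrides; on stock Amazon Linux
# AMIs, preserve_hostname must first be set to false in /etc/cloud/cloud.cfg.
preserve_hostname: false
fqdn: myhostname.com
```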

#To add additional users to the instance

#cloud-config
users:
 - name: bob
   sudo: ALL=(ALL) NOPASSWD: ALL
   groups: admin, root

 

This will add the user bob to the instance, along with the default user created by the distro (ec2-user on Amazon Linux, ubuntu on Ubuntu).

#To update packages and install new ones:

#cloud-config
package_update: true
packages:
 - pwgen
 - nginx

The above will update the distro’s package index and install the pwgen and nginx packages.

#Disk Setup and mount the additional EBS volumes.

#cloud-config
# - /dev can be omitted for device names starting with xvd, sd, hd, vd
# if device does not exist at the time, an entry will still be written to /etc/fstab
mounts:
- [xvdb, /data,"auto","defaults,nofail", "0", "0"]

#setup the file system on the device
fs_setup:
 - label: data
   filesystem: 'ext4'
   device: '/dev/xvdb'
   partition: auto
runcmd:
 - mkdir /data

This basically sets up an ext4 filesystem on the device and adds the mount entry to /etc/fstab in Linux.

Thanks for viewing.

Best Regards

Naveen

AWS

Custom AMI with Custom hostname

I have been using Amazon Web Services for a while now, and it has let me get my hands dirty with various services. In AWS, AMIs (Amazon Machine Images) provide the information, such as the operating system, application server and applications, needed to launch a virtual server (an instance) in the cloud. There are lots of AMIs to choose from, provided by AWS or by the community, and you can pick the one that meets your requirements. You can customize an instance launched from an AWS-provided AMI and create your own AMI from it. All AMIs you create are private by default.

Interestingly, instances launched from public AMIs in AWS come with a default user name and no password authentication, which I sometimes don’t like. For example, instances launched with Amazon Linux have the default user name ec2-user, and for Ubuntu instances the default user name is ubuntu.

Instances launched with public AMIs also do not allow you to change the hostname on the fly using user-data. The hostname of any instance launched from a public AMI looks something like

ip-<Private-IPv4>

Example: ip-172-1-20-201

So I decided to create an AMI with a default user named Naveen and a password of *****. I would also like my instance to have the hostname myhostname.com. I will use a cloud-config script to do that.

cloud-init is a multi-distribution package that handles early initialization of cloud instances. More information can be found at Cloud-Init. Some of the tasks performed by cloud-init are

  • Set hostname
  • Set the default locale
  • Set up the default user
  • Generate host private ssh keys
  • Parse and handle user-data

Custom AMI

To create my custom AMI with the above-mentioned changes, I have followed the steps below:

1. I have launched a t2.micro instance with the Amazon Linux AMI ‘ami-4fffc834’. You can launch the instance using the AWS management console or the AWS command line (aws-cli). I have used the aws-cli to launch the instance.

aws ec2 run-instances --image-id ami-4fffc834 --count 1 --instance-type t2.micro --key-name Naveen

The above command will launch one t2.micro instance with the key name ‘Naveen’.

2. As I have launched the instance using Amazon Linux, the default user-name is ec2-user. Amazon Linux sets the default user using cloud-init. The configuration file for setting the default user can be found at /etc/cloud/cloud.cfg.d/00_default.cfg. The config file looks something like below

system_info:
  # This will affect which distro class gets used
  distro: amazon
  distro_short: amzn
  # Default user name + that default user's groups (if added/used)
  default_user:
    name: ec2-user
    lock_passwd: true
    gecos: EC2 Default User
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [ i386, x86_64 ]
      search:
        regional:
          - repo.%(ec2_region)s.%(services_domain)s
          - repo.%(ec2_region)s.amazonaws.com
  ssh_svcname: sshd


The 00_default.cfg contains other things as well, but I have posted only the parts that need to be changed. As we can see, the default user-name for this distro is ec2-user. lock_passwd: true means a user trying to log in as ec2-user is not allowed to authenticate with a password.

3. I have changed the user-name to Naveen and set lock_passwd: false in the config file. However, this config file does not accept a plain-text password; you need to give the password for the user as a hash. To generate one, I have used the following commands on an Ubuntu machine

# mkpasswd comes with the whois package
sudo apt-get install whois

# Generate a hash using mkpasswd
mkpasswd --method=SHA-512
# This will prompt you to enter a password
# After entering the password, mkpasswd prints the hash on the console
# Ex: $6$G0Vu5qLWx4cZSHBx$0VYLSoIQxpLKVhlU.oBJdVSW7Ellswerdf.r/ZqWRuijiyTjPAXJzeGwYe1D/f94tt/tf
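If mkpasswd is not available, openssl (1.1.1 or newer) can produce the same SHA-512 crypt format with its -6 option. The password and salt below are illustrative examples, not the values used elsewhere in this post:

```shell
#!/bin/sh
# Alternative sketch: generate a SHA-512 crypt hash without mkpasswd.
# Requires OpenSSL 1.1.1+ for the -6 (SHA-512) option.
# Password and salt here are examples only; pick your own.
HASH=$(openssl passwd -6 -salt examplesalt 'MySecretPass')
echo "$HASH"
```

The output has the form $6$&lt;salt&gt;$&lt;hash&gt;, which is exactly what the passwd key in the cloud-init config expects.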

Copy the generated hash and add it to the ‘passwd’ key in the above config file. After making the final changes, the config file looks like this

system_info:
  # This will affect which distro class gets used
  distro: amazon
  distro_short: amzn
  # Default user name + that default user's groups (if added/used)
  default_user:
    name: Naveen
    lock_passwd: false
    passwd: $6$G0Vu5qLWx4cZSHBx$0VYLSoIQxpLKVhlU.oBJdVSW7Elwerfwq.r/ZqWRuijiyTjPAXJzeGwYe1D/f94tt/tf1lXQYJtMtQLpvAqE1
    gecos: Modified Default User name
    groups: [ wheel ]
    sudo: [ "ALL=(ALL:ALL) ALL" ]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirrors:
    - arches: [ i386, x86_64 ]
      search:
        regional:
          - repo.%(ec2_region)s.%(services_domain)s
          - repo.%(ec2_region)s.amazonaws.com
  ssh_svcname: sshd

4. Finally, I have made the following changes in rc.local, which change the behavior of the ssh service to accept password authentication, and set preserve_hostname to false in /etc/cloud/cloud.cfg

if grep -Fxq "PasswordAuthentication no" /etc/ssh/sshd_config
then
  sed -i 's/^PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
  /etc/init.d/sshd restart
fi
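To see what that sed edit does without touching a real sshd_config, here is a small self-contained demo against a scratch file (the file and its contents are just for illustration):

```shell
#!/bin/sh
# Demo sketch: the same sed substitution applied to a scratch copy,
# so nothing on the real system is modified.
TMP=$(mktemp)
printf 'PasswordAuthentication no\n' > "$TMP"

# Flip the setting, exactly as in the rc.local snippet above
sed -i 's/^PasswordAuthentication.*/PasswordAuthentication yes/' "$TMP"

RESULT=$(cat "$TMP")
echo "$RESULT"   # prints PasswordAuthentication yes
rm -f "$TMP"
```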

With these changes I have achieved a default user-name of Naveen with a default password. I have then created an AMI from the instance using aws-cli

aws ec2 create-image --instance-id i-09ebf4e320b0cadca --name "ONE_AMI"

Output:
{
    "ImageId": "ami-ebec0c91"
}

#Cloud-config for setting hostname

With the customized AMI I can launch an instance with the user-name Naveen, but the hostname will still be in the format ip-<Private-IPv4>. So I have used the below cloud-config script to change the hostname.

#cloud-config
#set the hostmachine name
fqdn: myhostname.com
#Add additional users for the machine
users:
 - name: sysadmin
   groups: [root,wheel]
   passwd: $6$G0Vu5qLWx4cZSHBx$0VYLSoIQxpLKVhlU.oBJdVSW7EllsvFybq.r/ZqWRuijiyTjPAXJzeGwYe1D/f94tt/tf1lXQYJtMtQLpvAqE1
   sudo: ALL=(ALL:ALL) ALL   
#Final Message
final_message: "The system is finally up, after $UPTIME seconds"

The above script will create the instance with the hostname myhostname.com and create a user sysadmin. The script is passed as part of user-data when launching an instance

aws ec2 run-instances --image-id ami-4240a138 --count 1 --instance-type t2.micro --user-data file://cloud.cfg

The above launches an instance without a key pair, which means I can only log into the instance using the default user Naveen or the sysadmin user created in the cloud-config script passed as user-data.
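Under the hood, user-data is delivered to the instance base64-encoded (the CLI handles the encoding for file:// input). A quick round-trip sketch with a minimal config:

```shell
#!/bin/sh
# Sketch: user-data is base64-encoded in the EC2 API; aws-cli encodes
# file:// input automatically. Round-trip demo with a minimal config.
CONFIG='#cloud-config
fqdn: myhostname.com'

ENCODED=$(printf '%s' "$CONFIG" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```

On the running instance, the decoded user-data is also retrievable from the metadata service at http://169.254.169.254/latest/user-data.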

Finally, with this I have an instance with my custom default user-name and password, and the hostname myhostname.com.
