DevOps Post Series : 2, How to install and configure SSL/TLS Certificate on AWS EC2



It is assumed that you have launched an EC2 instance with a valid key pair, configured the security groups, installed Apache/Nginx, and deployed your app.


Now, it's time to configure your TLS/SSL certificate. Why would you want to configure your own certificate when you can get Amazon to issue a free TLS/SSL certificate? Well, there are more than a few use cases that we have come across.

  1. First and foremost, AWS Certificate Manager certificates can be installed only on Elastic Load Balancers, Amazon CloudFront distributions, or APIs for Amazon API Gateway (at the time of writing).
  2. You are building a staging/testing server, will test integrations on it, and require SSL/TLS.
  3. You are just starting off and have only one EC2 instance to begin with. (You cannot install an AWS-provisioned certificate on an EC2 instance directly.)
  4. You are provisioning a new service, say for data exchange between your customers and their customers/vendors, and it will be a very under-utilised service.
  5. You are planning an endpoint for SSO/OpenID etc. and prefer to keep it logically separate from the rest of your setup.

And there are at least a dozen other use cases that come to mind, but I am leaving them out for brevity.

Getting Started

Self-Signed Certificates:

First, enable Apache on your EC2 instance and install/enable SSL.

(As usual, I’ll try to give the instruction for both RPM and DEB package based distributions)

sudo systemctl is-enabled httpd

This should return “enabled”; if not, enable it by typing the following:

sudo systemctl start httpd && sudo systemctl enable httpd
sudo yum update -y
sudo yum install -y mod_ssl
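For DEB-based distributions, a rough equivalent would be the following sketch. This assumes the stock apache2 and ssl-cert packages; on Debian/Ubuntu, SSL support is enabled with a2enmod rather than by installing a separate mod_ssl package, and the auto-generated self-signed pair typically lands under /etc/ssl instead of /etc/pki/tls.

sudo systemctl start apache2 && sudo systemctl enable apache2
sudo apt-get update
sudo apt-get install -y openssl ssl-cert
sudo a2enmod ssl && sudo a2ensite default-ssl
sudo systemctl restart apache2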

Follow the on-screen instructions; you will be asked some basic questions such as domain name, country, email ID, etc. If you accepted the default locations from the prompts, two files will have been generated in the following locations.

/etc/pki/tls/private/localhost.key – This is an auto-generated 2048-bit RSA private key for your Amazon EC2 host. You can also use this key to generate a certificate signing request (CSR) to submit to a certificate authority (CA).
/etc/pki/tls/certs/localhost.crt – This is a self-signed X.509 certificate for your server host. This certificate is useful only where you can control the “client” environment, like a testing or staging server.
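If you want to sanity-check what was generated, OpenSSL can print the certificate details and verify the key (paths as above):

openssl x509 -in /etc/pki/tls/certs/localhost.crt -noout -text
sudo openssl rsa -in /etc/pki/tls/private/localhost.key -check -noout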

Now, restart Apache:

sudo systemctl restart httpd

And try https://your-aws.public.dns or https://[yourpublicip].

Since you're accessing your site with a self-signed, untrusted host certificate, your browser may display a series of security warnings. Once you add it to the exception list, you should be good to go. That would be the end of it if you're only looking for a certificate to be used in staging or other controlled environments. If you want a public-facing SSL certificate, so your users/customers can log in and access this new service, read on.

CA-Signed Certificate

– Go to /etc/pki/tls/private/ and generate a new private key:

sudo openssl genrsa -out virtualserver1.key 2048

This generates an RSA key equivalent to the default one. You can also generate a 4096-bit key, or not use RSA at all and rely on a different algorithm, but those options are beyond the scope of this post.
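For reference, a couple of common variants (the file names below are only illustrative):

sudo openssl genrsa -out virtualserver1.key 4096
sudo openssl ecparam -name prime256v1 -genkey -noout -out virtualserver1-ec.key

The first generates a 4096-bit RSA key; the second generates an ECDSA (P-256) key instead of RSA.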

sudo chown root:root virtualserver1.key
sudo chmod 600 virtualserver1.key
ls -al virtualserver1.key

Now, you can use this key to generate a Certificate Signing Request

sudo openssl req -new -key virtualserver1.key -out csr.pem

When you do this, OpenSSL will open a series of prompts for various pieces of information; the “Common Name” is the one field that is mandatory for you to get a certificate. All the other fields it asks for are optional. Once you're done with that, you should have generated a csr.pem.
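Before submitting it, you can double-check the contents of the CSR:

openssl req -in csr.pem -noout -text -verify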

Submit the CSR to a CA. This usually consists of opening your CSR file in a text editor and copying the contents into a web form. At this time, you may be asked to supply one or more subject alternative names (SANs) to be placed on the certificate.

Remove or rename the old self-signed host certificate localhost.crt from the /etc/pki/tls/certs directory and place the new CA-signed certificate there (along with any intermediate certificates).
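You also need to point Apache at the new files. A minimal sketch of the relevant directives in /etc/httpd/conf.d/ssl.conf, assuming hypothetical file names for the issued certificate and the intermediate chain:

SSLCertificateFile /etc/pki/tls/certs/virtualserver1.crt
SSLCertificateKeyFile /etc/pki/tls/private/virtualserver1.key
SSLCertificateChainFile /etc/pki/tls/certs/intermediate-chain.crt

(On newer Apache 2.4 releases the chain can simply be appended to the file referenced by SSLCertificateFile instead of using SSLCertificateChainFile.)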

Once you've copied the contents of the csr.pem file into the form and submitted it to your CA, you will receive an email confirming the issuance of the certificate. Once the new certificate is installed and Apache restarted, check your application over HTTPS; it should now show a “green padlock”, meaning the connection is fully trusted.

You can also run a security test on your SSL setup: just go to SSL Labs and start a test by entering your URL. After about 2-5 minutes, you will receive a rating and details, something similar to the following image.
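If you prefer a quick check from the command line, OpenSSL can show the certificate chain your server actually presents (replace the placeholder host name with your own domain):

openssl s_client -connect your-domain.example:443 -servername your-domain.example </dev/null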

That’s It! You’re done.

DevOps Post Series : 1, How to install and configure LAMP on AWS EC2


In this #DevOps-centric series of blog posts, I will write about some interesting yet common problems and their solutions, along with quick guides and how-tos. The series is the result of setting up a new #Datacenter for the #Startup I am working at.


In this post, I will assume that you have already launched an EC2 instance with the operating system of your choice. Generally, Amazon Linux (based on RedHat/CentOS) or Ubuntu is the preferred OS. In case you prefer an exotic flavour of Linux which supports neither rpm/yum (RHEL/CentOS/Fedora/Amazon Linux) nor apt (Debian/Ubuntu and derivatives), this article may not be of much use to you.

  1. Connect to your instance – Use the private key you downloaded during the ec2 launch.
    1. If you're on Linux or Mac – use the following, replacing the private key name and the instance's public DNS (a full example is shown after this list) –  ssh -i "loginserver."
    2. If you’ve launched an Amazon Linux, use “ec2-user” instead of “root”
    3. If you’ve launched an Ubuntu Linux, use “ubuntu” instead of “root”
    4. Another important thing is to ensure that the private key has 0400 permissions and is “owned” by the user as whom you'll execute the SSH connection.
  2. Update your package manager
    1. Amazon Linux : sudo yum update
    2. Ubuntu Linux: sudo apt-get update
  3. Tools & Utils (Optional/Personal Preference) I normally prefer to have a couple of tools installed in the server for quick-hacks/edits, monitoring etc.
    1. Amazon Linux : sudo yum install -y mc nano tree multitail git lynx
    2. Ubuntu Linux: sudo apt-get install -y mc nano tree multitail git lynx
      1. For details on the above-mentioned tools, refer the bottom of the article.
  4. LAMP Server
    1. Amazon Linux :sudo yum install -y httpd24 php70 mysql56-server php70-mysqlnd mysql56-client
    2. Ubuntu Linux: sudo apt-get install mysql-client-core-5.6 mysql-server-core-5.6 apache2 php libapache2-mod-php php-mcrypt php-mysql
      1. Your operating system will start to download and install the specified software. As for MySQL, you will be prompted for a root password. After installation, I strongly recommend you run mysql_secure_installation and follow the on-screen instructions.
      2. Some of the critical things to do are removing the “test” DB and removing access for "root"@"%"; the others are optional.
      3. The optional steps are,
        1. remove the anonymous user accounts.
        2. disable the remote root login.
        3. reload the privilege tables and save your changes.
  5. Configuration and other dependencies
    1. Amazon Linux :
      sudo yum install php70-mbstring.x86_64 php70-zip.x86_64 composer node -y
    2. Ubuntu Linux: replace yum install with apt-get install (the package names differ slightly, e.g. php-mbstring and php-zip).
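For step 1, the SSH connection looks like the sketch below; the key file name, user and host are placeholders, so substitute your own values.

chmod 400 loginserver.pem
ssh -i "loginserver.pem" ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com

For Ubuntu AMIs, log in as “ubuntu” instead of “ec2-user”, as noted above.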

Finally, restart the services and off you go: you have successfully installed a LAMP server on EC2. Now go to your browser and enter the public DNS of the EC2 instance, and you should see the default Apache page. If you get either a timeout or a not-found error, it may mean you have to configure the security group accordingly: you should “ALLOW” ports 80/443 (HTTP/HTTPS) in the security group.
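To confirm that PHP is wired into Apache, you can drop a small test page into the web root (the path assumes the Amazon Linux defaults and the file name is just an example):

sudo sh -c 'echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php'

Browse to http://<your-public-dns>/phpinfo.php, and once you have verified the page, remove it:

sudo rm /var/www/html/phpinfo.php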







How to Disable an Adblocker-blocker or Create an Anti-Adblock Killer!


History & Theory:

Digital Advertisement:

I get it. Ads are a necessary evil in the content delivery game. Hell, I have been on the engineering side of content delivery for 10 years myself. So, back in the days of the #dotcom #bubble, we endured banner ADs. When the #BigBrother, oops #Google, came along, they swept the market clean with their (initially, at least) non-intrusive text ADs. And people even appreciated the contextual advertisement: just when you were searching for a suspension for your car, you would see 4 different ADs for OEM-grade replacement suspensions, grease monkeys to install them, and so on.

Fast forward 10 years and Google is the global powerhouse of advertisement. Google knows what your mother's cousin once removed likes and runs ads tailored to it on no fewer than 50 websites run by Google and countless other affiliates. The convenience transformed itself into a mild hindrance and then a major nuisance in no time. At its core, Google, Microsoft and Yahoo advertising is all based on a relevance engine: based on the content currently served by the publisher (the website you're visiting), they search their database for relevant ADs, and the one that matches the content and whose target profile matches yours (this is where privacy advocates go crazy) is the AD they serve. In its simplest form, the process looks something like the diagram below.

ADsense process diagram
Contextual Advertisement – Process Flow (ADSense/ADWords)

For the inquisitive lot who want to know the technicalities, it is a lot more complex than this, and it is presented below.

Tentative Process flow of How ADWords and ADSense content advertisement happens.

Enter ADBlockers:

And soon, people found a way to block the ADs. As seen above, all of these adverts are programmed to run using great stores of data in the backend. So, when a user visits a site, a lot happens in the backend, and a script is used to render the resulting AD piece. Technically inclined people started writing custom scripts to stop this script that renders the ADs. In no time, all the bells and whistles like #blacklist, #whitelist and #regular-expression support came in. Once modern browsers shipped with content filtering built in, it was easy to supplement them with custom lists and scripts to block these ads. ADBlockers became available for every device, OS and browser, and public knowledge of them exploded their use around the 2013-2015 period (see graph below). So, all seems rosy from here.


Publishers and their representative trade bodies, on the other hand, argue that internet ads provide revenue to website owners, which enables them to create or otherwise purchase content for their websites. Publishers claim that the prevalent use of ad-blocking software and devices could adversely affect website owner revenue and thus, in turn, lower the availability of free content on websites. So it is no wonder that publishers have begun to block or evict users found to be using #ADBlockers. (A page from my personal experience: I do not remember a time when I did not use an ADblocker; before Mozilla, I used MyIE (Maxthon), which had configurable filters.) But of late the publishers have become more aggressive and have rolled out a slew of their own warriors, a.k.a. the ADBlocker-Blocker: nifty little utilities you can embed in your site so that traffic from ADB-enabled users is blocked until they disable it or whitelist you. Some majors like The Economist, Wired and others have announced a novel approach: either you disable your ADB on their site, or you pay a small fee to see the site without the clutter of advertisements. For the sites that do not offer this option, or if you wish to simply override them, read on.

Practice & Implementation

So, enter Anti-AdBlocker Killer —

It's simple, really: it tricks sites that use #anti-adblocker technology into thinking you aren't using an adblocker. The #adblocker-blocker lets you keep your adblocker on when you visit a page that would usually disable it, by using a JavaScript file and a filter list. This means you can work around adblocker bans from common news companies, like Forbes, which lock you out when you're detected.

It works against a number of different technologies used to detect #adblock users, and is likely to be a part of the next #armsrace as publishers work out how to block the #adblockers using #adblocker-blockers. If you're still reading, I will conclude my narration and give step-by-step instructions on how to install and activate it.

Step-by-step Instruction to Activate Anti-Adblock Killer

  1. Step 1 – Get a Script Manager:
    1.  Greasemonkey or Scriptish
    2.  Tampermonkey or Native
    3.  Tampermonkey or Violentmonkey
    4.  Tampermonkey or NinjaKit
    5.  Tampermonkey
        • (* After installation, depending on your browser, a browser restart may be required for it to take effect)
  2. Step 2 – Subscribe to a FilterList
    1. Subscribe from (I prefer this)
    2. Subscribe from 
      • At this point, if you chose the GitHub list, you'll be prompted with a list of extensions and you can choose to manually install AAKiller. (A representative screenshot is shown below.)
  3. Step 3 – Get User Scripts
    1. Install from
    2. Install from
    3. Install from
    4. Install from

Once this is done, you're on your way to enjoying browsing free of AD-blocker pop-ups.

More data to support Planet Nine Hypothesis


Last year, the existence of an unknown planet in our Solar System was announced. However, this hypothesis was subsequently called into question as biases in the observational data were detected. Now, Spanish astronomers have used a novel technique to analyse the orbits of the so-called extreme trans-Neptunian objects and, once again, they report that there is something perturbing them: a planet located at a distance between 300 and 400 times the Earth-Sun distance.

Like the comets that interact with Jupiter.
At the beginning of 2016, researchers from the California Institute of Technology (Caltech, USA) announced that they had evidence of the existence of this object, located at an average distance of 700 AU and with a mass 10 times that of the Earth. Their calculations were motivated by the peculiar distribution of the orbits found for the trans-Neptunian objects (TNOs) in the Kuiper Belt, which suggested the presence of a Planet Nine within the solar system.

Using calculations and data mining, the Spanish astronomers have found that the nodes of the 28 ETNOs analysed (and the 24 extreme Centaurs with average distances from the Sun of more than 150 AU) are clustered in certain ranges of distances from the Sun; furthermore, they have found a correlation, where none should exist, between the positions of the nodes and the inclination, one of the parameters which defines the orientation of the orbits of these icy objects in space.

“Assuming that the ETNOs are dynamically similar to the comets that interact with Jupiter, we interpret these results as signs of the presence of a planet that is actively interacting with them in a range of distances from 300 to 400 AU,” says De la Fuente Marcos, who emphasizes: “We believe that what we are seeing here cannot be attributed to the presence of observational bias”.

Is there also a Planet Ten?
De la Fuente Marcos explains that the hypothetical Planet Nine suggested in this study has nothing to do with another possible planet or planetoid situated much closer to us, and hinted at by other recent findings.

Also applying data mining to the orbits of the TNOs of the Kuiper Belt, astronomers Kathryn Volk and Renu Malhotra from the University of Arizona (USA) have found that the plane on which these objects orbit the Sun is slightly warped, a fact that could be explained if there is a perturber of the size of Mars at 60 AU from the Sun.

“Given the current definition of planet, this other mysterious object may not be a true planet, even if it has a size similar to that of the Earth, as it could be surrounded by huge asteroids or dwarf planets,” explains the Spanish astronomer.

“In any case, we are convinced that Volk and Malhotra’s work has found solid evidence of the presence of a massive body beyond the so-called Kuiper Cliff, the furthest point of the trans-Neptunian belt, at some 50 AU from the Sun, and we hope to be able to present soon a new work which also supports its existence”.

India Launches PSLV-C38 with 31 Satellites


The Indian Space Research Organisation (ISRO) successfully launched its PSLV-C38 rocket on Friday on a mission to send 31 satellites, including India's Cartosat-2 and NIUSAT satellites along with 29 foreign nanosatellites, into orbit, ISRO said in a press release.

“India’s Polar Satellite Launch Vehicle, in its 40th flight (PSLV-C38), launched the 712 kg [0.7 tonnes] Cartosat-2 series satellite for earth observation and 30 co-passenger satellites together weighing about 243 kg [0.2 tonnes] at lift-off into a 505 km [313 mile] polar Sun Synchronous Orbit (SSO),” ISRO said.

According to ISRO, the co-passenger satellites comprise 29 nanosatellites from 14 countries, namely Austria, Belgium, Chile, the Czech Republic, Finland, France, Germany, Italy, Japan, Latvia, Lithuania, Slovakia, the United Kingdom and the United States, as well as one nanosatellite from India.

Google launches new TensorFlow Object Detection API


Object Detect API

Google has finally launched its new TensorFlow Object Detection API. This new feature gives researchers and developers access to the same technology Google uses for its own operations, such as image search and street-number identification in Street View.

The company had been planning to release this feature for quite some time, and it is finally available to the open-source community. The system Google has released won the Microsoft Common Objects in Context (COCO) object detection challenge last year, beating 23 teams participating in the challenge.

According to the company, it released this new system to bring the general public closer to AI, and also to get developers and AI scientists to collaborate with the company and build new and innovative things using Google's technology.

Google is not the first company offering AI technology to the general public, users and developers. Microsoft, Facebook and Amazon have also given people access to their respective AI technologies. Moreover, Apple, at its recent WWDC, also rolled out an AI technology named Core ML for its users.

One of the main benefits of this release is that it lets users run the object detection system on mobile phones. The system is based on the MobileNets image recognition models, which can handle tasks like object detection, facial recognition and landmark recognition.

Internet fast lane for first responders


Time is the enemy of first responders, and communication delays can cost lives. Unfortunately, during natural disasters and other crises, communications, both cellular and internet, are often overloaded by friends and family reaching out to those in affected areas. That extra network traffic has in the past impacted the ability of first responders to send and receive data.

Researchers at the Rochester Institute of Technology are testing a new protocol — developed with funding from the National Science Foundation and U.S. Ignite — that will allow first responders and emergency managers to send data-intensive communications over the internet regardless of the amount of other traffic eating up the available bandwidth.

The protocol — dubbed MultiNode Label Routing, or MNLR — runs below existing internet protocols, allowing other traffic to run simultaneously. Rather than using traditional transmission protocols, it uncovers routes based on the routers’ labels, which in turn carry the structural and connectivity information among routers.

It also features an immediate failover mechanism so that if a link or node fails, it uses an alternate path as soon as the failure is detected, which also speeds transmission.

According to Nirmala Shenoy, professor in RIT’s Information Sciences and Technologies Department and principal investigator of the project, the protocol is designed to give transmissions over MNLR priority over other traffic so that critical data isn’t lost or delayed.  “Because MNLR literally bypasses the internet protocol and other routing protocols, it can put other traffic on the Internet to a lower priority,” she said.

According to the RIT team, the protocol’s ability to prioritize transmissions solves problems encountered during recent major hurricanes when first-responders and emergency managers had difficulty transmitting large but critical data files such as LIDAR maps and video chats.

“Sharing data on the internet during an emergency is like trying to drive a jet down the street at rush hour,” Jennifer Schneider, RIT professor and co-principal investigator, told RIT news.  “A lot of the critical information is too big and data heavy for the existing internet pipeline.”

Another communications challenge during disasters is damage to network routers.  Accordingly, the team built capabilities into the MNLR protocol to get around routing limitations of the major existing internet protocols.  Specifically, the team included a faster failover mechanism so that if a router link fails, the transmission will be automatically rerouted more quickly than existing protocols support.  In testing the protocol during a link failure, the team found that MNLR recovered in less than 30 seconds, while Border Gateway Protocol required about 150 seconds.

While the team is continuing to refine the protocol, Shenoy acknowledged that deployment of MNLR will face two hurdles.  First, the protocol has to be broadly adopted into current routers.  “That depends on equipment vendors,” she said.

Secondly, heavy use of MNLR during an emergency will impact other internet traffic.  While most service providers and even customers are likely to be OK with that, service agreements will likely need to be modified to account for variable service during emergencies.

Amazon Linux now available for On Premise Development/Testing


Finally (well, since November 2016), Amazon Web Services is letting customers download its own flavour of Linux.

Cloud instances have often been suggested as the ideal test and dev environment, on cost-avoidance grounds. AWS says it has made its Linux available after customer requests to do more development on-premises. Those requests don't represent a bursting of the cloud bubble, but it's nonetheless notable that developers feel the need to do some testing without paying for it by the hour.

More often than not, we feel the need to deploy a local development or testing instance of our app/product. We have all gone through the same routine:

  1. Developer Testing & Code Reviews
  2. Automated Test with CI
  3. Meticulous testing and validation by QA
  4. Deploy in Staging

And All hell broke loose on you!!!

It could be anything, ranging from a simple charset/locale issue to the wrong version of the JVM or of a library. Sometimes it could be something more sinister, like the AWS ALB/ELB handling the request in a way your app server did not intend to handle!

So, many DevOps engineers, SCMs and product owners prayed at the almighty altar of AWS. And their prayers have been answered.

The cloud giant’s chief evangelist Jeff Barr made the announcement in this blog post.

The company has released its Linux container image to assist those planning a move into its cloud, so they can test their software and workloads on-premises. Previously the image was only accessible in the cloud, for customers running virtual machine instances on AWS.

The image is available from the EC2 Container Registry (read Pulling an Image to learn how to access it). It is built from the same source code and packages as the AMI and will give you a smooth path to container adoption. You can use it as-is or as the basis for your own images.
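As a minimal sketch of trying it locally, assuming Docker is already installed and pulling the image from Docker Hub rather than the EC2 Container Registry (both routes are described in the linked docs):

docker pull amazonlinux
docker run -it amazonlinux:latest /bin/bash

Inside the container you get the same yum-based environment as the AMI, so you can install and test your packages there before deploying to EC2.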

And Here you Go!!


Image Courtesy: Amazon Inc. (c)


Ubuntu Retires Unity, Desktop Switching Back To GNOME


Approximately six years ago, Canonical made #Unity the default shell in Ubuntu. This was billed as a step towards bringing Ubuntu to hitherto unavailable devices like tablets and mobiles. I personally was not amused: the Unity shell had been available in one form or another for some time before that, namely in the Netbook Remix of Ubuntu.

Last week, Mark Shuttleworth, founder of Ubuntu and Canonical, confirmed in a post on the company's official blog that the company is giving up on Unity and that the default Ubuntu desktop will shift back to GNOME for Ubuntu 18.04 LTS.

Shuttleworth reiterated the company's commitment to the Ubuntu desktop that millions of users across the globe rely on. He says that Canonical will continue to produce this open-source desktop, maintain existing LTS releases, work with commercial partners to distribute Ubuntu, and provide support to corporate customers. Nothing is changing on that front.

He points out that the community viewed the Unity effort as fragmentation rather than innovation, even though the aim was to deliver it as free software, an alternative to the closed options currently available to device manufacturers.

It is out of respect for the market’s wishes (or mounting pressure from the community) that Canonical has decided to shelve this project and shift the desktop back to #GNOME starting next year.


Bezos sells $1 bn in Amazon stock yearly to pay for Blue Origin


Yesterday, billionaire entrepreneur Jeff Bezos introduced the Blue Origin capsule to the press corps.

Speaking Wednesday at the 33rd Space Symposium in Colorado Springs, Colorado, Bezos vowed to lower the cost of space travel and start taking customers to space by next year. Jeff Bezos said he is selling $1 billion in stock of his retail giant Amazon each year to finance his rocket company, Blue Origin, which aims to carry tourists to space by 2018.

The entrepreneur did not say how much a ticket would cost, as he showed off the New Shepard rocket and a mock-up of the large-windowed capsule that tourists will one day ride to suborbital space — just past the Karman Line some 62 miles (100 kilometers) above Earth — and back.

Bezos did say that the next-generation New Glenn rocket, which would be powerful enough to reach orbit and is expected to start flying satellites by 2020, is expected to cost $2.5 billion to develop.


“My business model right now for Blue Origin is that I sell about $1 billion a year of Amazon stock and I use it to invest in Blue Origin,” he said.

“It’s very important that Blue Origin stand on its own feet and be a profitable, sustainable enterprise. That’s how real progress gets made.”

Bezos, a lifelong space enthusiast, founded Blue Origin in 2000.