Docker CMD and ENTRYPOINT

In this post I am going to explain the difference between CMD and ENTRYPOINT. Requirements: Docker Introduction, Dockerfile Build And Layers, Docker CMD, Docker ENTRYPOINT, Github example code for this post. Give the docs linked above in the requirements a read if you haven’t already and you’ll be better off. If I run a command, any files required to run it should be in the Github repo, and you should be able to run the commands as long as you are in that folder.

Dockerfile Build And Layers

In the introduction post we covered some basic docker fundamentals which we are going to build on. Running and modifying a container locally is good for troubleshooting, or for starting your build. The real goal is to be able to consistently build customized images that run our apps. Requirements: Docker Introduction, Dockerfile Reference, docker build, Github example code for this post. Give the docs linked above in the requirements a read if you haven’t already and you’ll be better off.

Docker Introduction

Docker containers are everywhere in modern development and operations. If you walk into most any tech conversation today, within 10 minutes people will likely be talking about containers in some way, shape, or form, because it’s that big right now. It’s that big for good reason: containers make it easy to develop a product and have it work the same across different environments. They’re trying to solve the “It works on my system” problem.

Switching From Pelican To Hugo - Conclusion

Conclusion: All in all I have only started to scratch the surface of Hugo, but so far it seems really awesome. There appear to be a whole bunch of built-in functions for image manipulation, translations, and more; check them all out in the docs. Within the metadata there is a ‘draft:’ key; posts marked as drafts don’t show up unless you build with the hugo -D flag, which I really like. I updated my site and republished it without these switching-to-Hugo posts, as I wanted to review and edit them; as long as draft was true they didn’t show up.

Switching From Pelican To Hugo - Pt3

Hugo has a different way of generating links, so I need to do a couple of things to make sure the new blog doesn’t break old links. Pelican took the title of the post, replaced spaces with ‘-’, and added ‘.html’ at the end: ‘updating-makefile-to-a-python-script-clean.html’. Hugo takes the file name, I think (my urls happen to be what I name the file), and makes what they call “pretty” urls

Switching From Pelican To Hugo - Pt2

I have a bunch of posts for Pelican that I need to convert into Hugo syntax. I could modify each file by hand… But no, just no. Well, maybe some parts. We’ll use tools to make most of this happen. Steps I’m going to cover: Copy Posts, Script Post Update, Update Metadata. Let’s roll. Copy Posts: This step is going to be quick. At a high level my file structure looks like:

Switching From Pelican To Hugo - Pt1

While I have been reasonably happy with Pelican, I don’t love it. It’s not really being maintained, themes are outdated, and I really don’t care to do front-end work, mainly because I don’t care how pretty something looks as long as it’s functional. Man, are there a lot of pretty-looking, useless websites out there. In order to build my new site I’m going to follow a similar process as I did with Pelican.

Terraform Conditionals

Logic statements (if, else if, else) are used everywhere in programming, including Terraform. The difference is that in Terraform you need to get clever. I primarily use conditionals as feature flags within my variable files. A use case that I currently have: my dev environment needs a VPN to Datacenter A, but my prod environment needs a VPN to Datacenter B. In my Terraform code I have resources for both VPNs.
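A feature flag of the sort described above can be sketched with Terraform’s count trick (the variable and resource names here are illustrative, not the post’s actual code, and the syntax assumes Terraform 0.12+):

```hcl
variable "enable_dc_a_vpn" {
  description = "Feature flag: create the Datacenter A VPN"
  type        = bool
  default     = false
}

resource "aws_vpn_connection" "datacenter_a" {
  # Terraform's if/else: create 1 copy when the flag is true,
  # 0 copies (the resource is skipped entirely) when it is false.
  count               = var.enable_dc_a_vpn ? 1 : 0
  customer_gateway_id = aws_customer_gateway.dc_a.id # assumed to exist elsewhere
  type                = "ipsec.1"
}
```

A dev.tfvars file would then set enable_dc_a_vpn = true while the prod variable file sets the Datacenter B flag instead.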

Terraform Count and Loops

When working with infrastructure there is a very good chance that we want more than one of some resource: we need more than one subnet, we need 4 instances. How can we accomplish that without having to explicitly declare each resource? We use the special ‘count’ key that exists for every resource type. Maybe we don’t want those subnets to have the same name, so we create a list of names to loop through.
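The count-plus-name-list pattern looks roughly like this (the names and CIDR are made up for illustration, and aws_vpc.main is assumed to exist elsewhere):

```hcl
variable "subnet_names" {
  default = ["web", "app", "db"] # illustrative names
}

resource "aws_subnet" "main" {
  # One subnet per entry in the list
  count      = length(var.subnet_names)
  vpc_id     = aws_vpc.main.id
  cidr_block = cidrsubnet("10.0.0.0/16", 8, count.index)

  tags = {
    # Loop through the list of names using count.index
    Name = var.subnet_names[count.index]
  }
}
```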

Terraform Interpolation

What is interpolation, and why do we need it? When we use Terraform to create a resource, we often want to use information from that resource while creating another resource. An example that I used before is getting the IP address of an instance for use with a DNS record. I am using the sample code from Terraform Variables as a starting point. We have the ability to create a dev or test vpc with their own names and cidrs.
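The instance-IP-into-DNS-record example can be sketched like this (the AMI id, zone id, and hostname are placeholders, not values from the post):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t2.micro"
}

resource "aws_route53_record" "web" {
  zone_id = "Z123EXAMPLE"    # placeholder hosted zone id
  name    = "web.example.com"
  type    = "A"
  ttl     = 300
  # Interpolation: pull the IP off the instance resource above
  records = [aws_instance.web.public_ip]
}
```

Terraform sees the reference and knows to create the instance first, then feed its IP into the record.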

Terraform Variables

This post is going to start off with the basics, and then move into intermediate-level concepts. Requirements: Intro to Terraform, Terraform Variable Docs, github example code for this post. Give the docs linked above in the requirements a read if you haven’t already and you’ll be better off. Steps I’m going to cover: Declare variables, Assigning variables, Using variables, Variable files. Let’s roll. Declare variables
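The declare/assign/use cycle can be sketched like this (the variable and resource names are illustrative, not the post’s exact code):

```hcl
# Declare a variable, optionally with a type and a default
variable "environment" {
  type    = string
  default = "dev"
}

variable "vpc_cidr" {
  type = string
}

# Use the variables inside a resource
resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = "${var.environment}-vpc" # string interpolation
  }
}
```

Values can then be assigned on the command line (terraform apply -var 'vpc_cidr=10.0.0.0/16') or collected in a variable file and passed with -var-file=dev.tfvars.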

Introduction to Terraform

Up till now I have been using the AWS-provided cli to manage their resources, but what if I also want to use Google Cloud? I would need to download their tools, as well as learn their configuration syntax. Terraform is a fantastic tool that gives us a consistent configuration syntax for managing many different providers. There are around 200 providers, 80 of which are supported directly by Hashicorp. Of course you still need to understand all of the provider-specific terminology: ec2 for AWS, instances for Google Cloud.

AWS cli setup

In some of my previous articles I used the AWS Command Line Interface (cli) without ever explaining how to set it up. Some may say that using the cli is hard, but the syntax and usage of the cli is very straightforward. What is hard is knowing which service to use and how the service is supposed to be configured. A perfect example is when I set up a Cloudfront Distribution: the cli command was extremely simple, the json required for setting up the distribution was not.

Create an AWS IAM user

For most of my articles I’ll likely be working with AWS in some fashion. One of the first things needed is an IAM user in order to act upon our account, whether from the console or the cli. I already have one set up, but I’ll quickly show you how to set up your own. Requirements: AWS Account - You’re on your own for setting up an account and logging into the console.

Updating Makefile to a Python script - Conclusion

Part 1: Clean. Part 2: build run dev container. Part 3: upload to s3, argparse. Line counts from wc: Makefile is 19 lines, 68 words, 586 bytes; newmake.py is 57 lines, 149 words, 1750 bytes. I went through the entire process of converting my Makefile to Python. After all of the effort, while the Python was generally faster, no human would ever notice. It took 3X more code to accomplish the same things in Python as it did in make. Maybe my code can be optimized, but I don’t think by much.

Updating Makefile to a Python script - upload to s3, argparse

I’m just jumping right into this one. Completed Makefile:

cat Makefile
current_dir = $(shell pwd)
current_container = $(shell docker ps -af name=gnoinski -q)
clean:
	rm -rf output/*
ifneq ($(current_container),)
	docker kill $(current_container)
	docker rm $(current_container)
endif
build:
	docker build -t gnoinski.ca:latest .
dev: clean build
	docker run -td -p 8080:8080 -v $(current_dir):/site --name bengnoinskidev -u $(USER) gnoinski.ca:latest /bin/bash -c '/site/develop_server.sh start 8080 && sleep 1d'
upload:
	aws s3 sync --delete output/ s3://ben.

Updating Makefile to a Python script - build run dev container

Since I already have most of the heavy lifting done between call and check_output, I think the rest of newmake.py should come together pretty quickly. I also had the epiphany that while using argparse I will mimic Make: instead of having switch flags for the functions, I’ll simply make it python3 newmake.py ACTION, where ACTION is either clean, build, dev, or upload. Some info before we get started.
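The Make-style positional ACTION described above can be sketched with argparse like this (a hypothetical sketch of the argument handling, not the post’s exact newmake.py code):

```python
import argparse

# The four Make-style targets the script accepts
ACTIONS = ("clean", "build", "dev", "upload")

def parse_args(argv=None):
    # One positional ACTION mimics `make <target>`:
    # `python3 newmake.py build` instead of per-task switch flags.
    parser = argparse.ArgumentParser(description="Make-style task runner")
    parser.add_argument("action", choices=ACTIONS, help="task to run")
    return parser.parse_args(argv)
```

Calling parse_args(["dev"]) returns a namespace whose action is "dev"; an unknown action exits with a usage error, much like an unknown make target.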

Updating Makefile to a Python script - Clean

I was over on yyjtech slack the other day when The Codependent Codr mentioned that he is using a Makefile for his project and someone replied “I really hope you just say the word Makefile because of very old habit.” I’m not against Makefiles, I use them at work and I started this project using one. For running a few quick commands, it’s really simple. When I was writing my cleanup script I found that sometimes my docker container would die.

Final Thoughts On Setting Up My Site

Part 1 - How this site came to be Part 2 - Uploading My New Site To S3 Part 3 - Setting up SSL Certs and Route53 Cert validation Part 4 - Setting up Cloudfront Distribution Part 5 - Invalidating Cloudfront Cache I’ve wanted to do something for quite some time, I just wasn’t sure what. The Codependent Codr gave me the kick I needed to get this started.

Invalidating Cloudfront Cache

In this post I will set up invalidating the Cloudfront cache every time I upload my site. “Ben, what the hell is the Cloudfront cache?” Good question. In the previous post I explained that Cloudfront only gets a requested file from your origin if a) it doesn’t already have it or b) the TTL has expired. So let’s say your site rarely ever changes and you have a TTL of 86400 (1 day). That means that if you update your site, the main page may not show your latest article for up to 1 day, depending on when Cloudfront last requested it.
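The invalidation itself is a single cli call; the distribution id below is a placeholder, substitute your own:

```shell
# Invalidate every cached path after uploading the site.
# E1EXAMPLE12345 is a placeholder distribution id.
aws cloudfront create-invalidation \
  --distribution-id E1EXAMPLE12345 \
  --paths "/*"
```

Hooking this into the upload step means Cloudfront fetches fresh copies from the origin on the next request instead of waiting out the TTL.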

Setting up Cloudfront Distribution

In this post I will go through the steps that I took to get my site set up behind Cloudfront. Info ~ Cloudfront sits in front of static assets such as html, images, or javascript. Cloudfront is a CDN (Content Delivery Network); it is not a webserver, even though there may be some webserver-like features that I’m not going to get into here. A CDN is a bunch of globally distributed servers that cache and distribute requested objects.

Set up ACM SSL Certs and Domain Validation with Route53

In this post I will go through the steps that I took to get my site set up behind Cloudfront. That was wishful thinking; that will happen in part 4. For now I am starting with a domain that has nothing else on it: no subdomains, no mx records, nothing. I will be updating the domain’s nameservers to Route53. WARNING ~ If your domain has existing records, be very careful following this post; if you change nameservers without setting up all of your other records first, your site(s) may stop working!

Uploading My New Site To S3

In this post I will go through the steps that I took to make my site live at an s3 website URL. Requirements: AWS account, AWS cli. Steps I’m going to cover: Create IAM user for myself, Setup AWS cli, Create S3 bucket for site, Enable bucket versioning, Enable website hosting, Update Makefile to include upload to s3. Let’s roll. Create IAM user for myself

How this site came to be

A co-worker was asking for some help with AWS Cloudfront/S3 in slack and I clicked on the link for the page he was setting up. I asked him what he was using and he said pelican, and that it was based on Python, which is the language I am most familiar with. I saw that it supports markdown, which I also enjoy using. He mentioned other people were using Jekyll, which is based on Ruby, so that was a haaaard no for me.

Exec vs Shell

So far I have been showing CMD and ENTRYPOINT with a json array, ie: CMD [“executable”, “arg1”, “arg2”]. This is called the ‘exec’ form and is the Docker preferred form. If you are using CMD together with ENTRYPOINT, you must use the ‘exec’ form. ‘shell’ is the alternate form for providing commands and would look like: CMD executable arg1 arg2. What are the main differences? For CMD or ENTRYPOINT, the ‘shell’ form runs within a shell (‘/bin/sh -c’), allowing for environment variable expansion such as echo $HOME. An ENTRYPOINT in ‘shell’ form prevents CMD from being used. An ENTRYPOINT in ‘shell’ form is started as a subcommand of ‘/bin/sh -c’, so you need to run your executable with exec to ensure that it is started as PID 1.
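A minimal sketch of the difference (the base image here is an arbitrary choice for illustration):

```dockerfile
FROM alpine:3.19

# exec form (preferred): a json array, no shell is involved,
# so "$HOME" would be passed as a literal string, not expanded
# CMD ["echo", "$HOME"]

# shell form: actually runs `/bin/sh -c 'echo $HOME'`,
# so the variable expands but the shell, not echo, is PID 1
CMD echo $HOME
```

Only one CMD takes effect per image (the last one wins), so the exec-form line is left commented out here to show both shapes side by side.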