Docker's been on my list of things to look at for a while - I thought it was about time to see what all the fuss was about. Building something is a great way to learn a new technology so I decided to put together a little learning project: build a very simple .NET Core web service and host it in a Docker container running in the cloud. I mean, how hard could it be?
After a lot of research on Google and a fair bit of trial and error I successfully implemented my little project. I thought it might be useful for others if I documented my little adventure. I've kind of gone down the tutorial route: some discussion and lots of step-by-step instructions.
In this article, I cover the following:
- An overview of Docker
- Create a super simple .NET Core Web Service
- Review some hosting options
- Set up hosting in AWS
- Get the .NET Core Web Service running in Docker, on AWS
The first task was to try and understand a bit more about Docker. Time to hit Google. It's a bit confusing as Docker is both the name of the company and the name of their platform. And people often say Docker when they actually mean Docker containers. As you'll see, I do it all the time.
The "What is Docker?" page on the Docker website is a start. The Docker documentation also contains useful information, including a brief explanation of containers. Googling around a bit, there's an interesting article on Computerworld, The layman's guide to Docker. For a bit more detail, Red Hat has an article What is Docker?, as does InfoWorld with Docker Linux containers explained.
A lot of what's covered later in the article assumes you have at least an elementary understanding of Docker containers and images.
What's this Docker thing all about and why do I care?
In my humble opinion, it's all part of the ongoing evolution of software development towards the web and the Cloud. I was going to say away from the desktop but that's not strictly true. Both desktop and mobile apps are increasingly making use of web-based services. It's becoming more and more commonplace for developers not only to use web-based services but also to implement them.
Docker is all about developing and hosting these web based services. It's really an evolution rather than anything new. Containers (the technology that Docker implements) solve some problems more efficiently than existing solutions.
From the commercial point of view, containers are more efficient than Virtual Machines. On the same hardware you can run more containers than you could Virtual Machines, making better use of server hardware, reducing costs.
From the developer's point of view, using containers can reduce problems caused by differences between the development and production environments. This is the classic problem where a new version of a service is released to production and it falls over in a heap. And the developer's first thought is "but it worked fine on my dev machine". Containers make it easier to develop web-based services in a production-like environment on your development machine, reducing the chance of nasty surprises.
While containers are nothing new, Docker has created a relatively easy to use platform for the average developer. You may still need to work some command line magic but Microsoft recently added support for Docker to Visual Studio. This has made working with containers a lot easier and more accessible to Microsoft developers.
Creating a .NET Core Web Service
What's all this about Linux?
In case you missed it, Microsoft's latest version of .NET is now cross platform. Well, sort of. ".NET Core" is a version of the .NET platform that's cross platform and open source. It supports most of the web side of things like ASP.NET MVC and what used to be Web API 2.0 (web sites and web services), but not desktop stuff like XAML, Windows Universal and all that. The desktop bits have been separated out. Presumably, Microsoft doesn't want people to start abandoning Windows in favour of Mac, Linux or Android. You'll still need to buy a Windows license to run desktop apps.
A lot of I.T. commentators have been getting excited about running .NET Core apps inside Linux Docker containers. If you're a Microsoft shop then why on earth would you want to run .NET on Linux? Just run it on Windows, right? Well, that depends. For hosting services in the public Cloud then Windows isn't necessarily the best choice.
The reality is that the majority of the internet runs on Linux. Why? That's a bit of a thorny subject but one of the big reasons is cost. Linux hosting is invariably cheaper than Windows hosting. If you want robust and scalable services then you're going to be running lots of stuff in the Cloud and the costs soon add up. Believe me, if you've ever had to sit down with the CEO of a small company to explain an eye wateringly expensive Cloud hosting bill you'll know exactly why it matters!
Bear in mind also that whatever consumes these web-based services won't care what OS they run on. That Android app that calls a web service to get its data doesn't care what OS the web service is running on, just so long as it gets its data.
Running .NET on Linux enables Microsoft shops to continue developing .NET services but to host those services on nice cheap Linux servers in the cloud. And Microsoft will happily let you run those Linux servers in Azure.
I should mention that my Linux experience is fairly limited, but I'm going to give it a try.
Before writing any code, you'll need to install Docker on your development machine. Head over to the Docker website and install Docker for Windows or macOS.
At the time of writing, Docker support in Visual Studio for Mac was still in beta. I'm using Visual Studio 2017 for Windows. More info on the Visual Studio Tools for Docker page on Microsoft's website.
After you've installed Docker, it will run in the background. There should be a little whale icon in the system tray. If not, you'll need to get it up and running before continuing...
Create an ASP.NET Core Web Application project
The .NET Core web service that I wrote was of the “Hello World” variety. It needed to be super simple. I wrote a single GET method that returned the text “Hello World”.
Fire up Visual Studio. I'm using Visual Studio 2017 - Community Edition for this project.
Create an ASP.NET Core Web Application project
Using the main menu at the top of the Visual Studio window, select File > New > Project.
On the New Project Dialog select: Visual C# > .NET Core > ASP.NET Core Web Application.
Change the project Name to "HelloWorld" and select where you want the project files to be saved, and then click "OK".
Select the ASP.NET Core Web Application type
On the next dialog, we need to select the kind of .NET Core application to create.
Ensure that ".NET Core" and "ASP.NET Core 2.0" are selected in the drop downs at the top of the dialog.
Select the Empty project type.
Ensure that the "Enable Docker Support" checkbox is ticked and that "Linux" is selected as the OS.
If any of the above options do not appear then check that you have the latest version of Visual Studio 2017 installed.
Click the OK button to create the new project.
Review the Solution
Check out the solution explorer.
If everything has gone to plan you should have two projects in your solution. One is the ASP.NET Core Web application and the other is a "docker-compose" project.
The "docker-compose" project is a new project type that instructs Docker to create an image when we build our solution. Docker uses the yml configuration files contained in the project. The default yml files that Visual Studio generated will be fine for our simple project. They instruct Docker to build an image containing a standard Linux OS, the Linux .NET runtime and to include the output of our .NET Core project in the Docker image.
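For reference, the generated docker-compose.yml looks something along these lines - treat this as a sketch rather than the exact file, as the service name and build context will match your own solution:

```yaml
version: '3'

services:
  helloworld:
    image: helloworld
    build:
      context: ./HelloWorld
      dockerfile: Dockerfile
```

It simply names the service, the image to build, and where to find the Dockerfile that describes how to build it.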
Add a controller class
Our simple API will consist of a single HTTP GET method. To implement this method we'll need to add a Web API Controller class to the HelloWorld project.
Using the main menu at the top of the Visual Studio window, select Project > Add Class.
On the new item dialog that appears, in the tree view on the left select ASP.NET Core > Web > ASP.NET. Select "Web API Controller Class" on the right hand side.
Enter "HelloWorldController.cs" as the name and click "OK".
Implementing the HelloWorld API
The new class file created by Visual Studio already contains a bunch of methods. Remove all of them and add a single method as follows:
// GET: api
public string Get()
{
    return "Hello World!";
}
Build and run the solution
Build the solution in the normal way. Assuming the build was successful, you should now be able to run the application. In the toolbar click the green Docker play button.
A web browser window should pop up, with the url localhost:xxxxx, where xxxxx is a random port number. The web page should show the text "Hello World!".
What's going on?
There's a file in the HelloWorld project named "Dockerfile". This file tells Docker about the application's dependencies - what needs to be in the image for your app to function. In this case it tells Docker to include ASP.NET Core 2.0 in the Docker image as well as the output from building the HelloWorld project.
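The generated Dockerfile for this project looks roughly like this - the base image tag and the publish path depend on your Visual Studio and project versions, so check the actual file in your project:

```dockerfile
# base image: Linux with the ASP.NET Core 2.0 runtime pre-installed
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
# copy the build output of the HelloWorld project into the image
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "HelloWorld.dll"]
```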
The docker-compose project contains additional build information. Docker Compose is actually a command line utility that's part of the Docker platform. Unsurprisingly, it's used for building Docker apps and this project is essentially just a wrapper around the docker-compose utility. If you look at the build output you'll see Visual Studio running the docker-compose utility to do the build. The yml files inside this project list the services that make up the application.
Note that the docker compose project is set as the startup project, not the HelloWorld project. When you run the App, it doesn't look like anything different has happened than if you'd started a .NET Core web project. Ordinarily, your .NET Core web app would run locally on Windows. Visual Studio would attach the debugger to it and fire up a web browser with the url your app is running under.
What's actually happening is that Docker is firing up a container using your Docker image. Your app is running inside the container - on Linux. Visual Studio attaches the debugger to your app inside the container and fires up a web browser to browse out to the app url. The docker-compose.override.yml file tells Docker to map port 80 inside the container to a random port outside the container - which is where the port number in the url in the web browser comes from.
So, just to be clear, the web browser is browsing out to the .NET Core app that's running inside the container, on Linux. When you debug the code in Visual Studio, it's debugging the app running inside the container. And it's this exact same Docker image that we're going to deploy to production.
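The relevant part of docker-compose.override.yml is the ports mapping: container port 80 with no fixed host port, so Docker picks a random one. Again, this is a sketch - your generated file will contain a little more than this:

```yaml
version: '3'

services:
  helloworld:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
```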
We've successfully created a Docker image containing our super simple web service. Now we need to figure out where to host it.
We've developed a Docker image containing our app. Now we want to put it into what I guess could loosely be termed production - hosted in the cloud. But where? There are many Cloud vendors. Assessing all the various Cloud hosting platforms is not in the scope of this project. I'm looking for something relatively quick and easy. Since I've previously worked with AWS and Azure, I'm going to stick with them.
From what I can see, there are a couple of different ways to do container hosting, although they all seem to revolve around Virtual Machines at some level or other. Some Cloud vendors provide dedicated services for hosting containers. These kinds of services seemed to be aimed towards the enterprise level, designed with scalability and robustness in mind. This is great, but they don't come cheap. I'm not going to attempt to look at these in any detail - that's a whole other project.
The simplest way to host a Docker container is in a bog standard cloud hosted Virtual Machine. Pretty much all Cloud vendors support that. Goes without saying though that if the VM goes down or gets overloaded you're screwed. A bad idea for anything in the slightest bit mission critical.
Now, I'm just looking to play around with Docker so for this project we're talking low volume, non critical. I'm looking at keeping costs to an absolute minimum.
Let's take a very quick look at Azure and AWS.
Azure provides a container ecosystem called Azure Container Service. On the surface, it looks ideal. Check out the prices on the Container Service pricing page.
Just for the record, after I'd completed the project, Microsoft launched a new cheaper B-series VM tier which I didn't consider. The pricing is pretty damn good. Just bear in mind that (at the time of writing) B-series is still in preview mode.
Since I'm located in the UK, I've selected the “West Europe” region, British Pound (£), Month. Here are the relevant VM prices:
The pricing for those A series VMs looks pretty good. Now, I might be missing something but when you start reading more closely about Azure Container Service it starts talking about having a minimum of three virtual machines. One of these is a master that must be a D2 size instance. Hmmm...
So, for one D2 v3 instance and two A0 instances we're looking at £86.514 a month. I understand that this is an enterprise level system but that's a crazy high entry point. For me personally, at that price, it's a big no to Azure Container Service.
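The arithmetic behind that figure is simple enough. The per-instance monthly prices below are the ones I was working from at the time (GBP, West Europe region) and are assumptions you should check against the current pricing page:

```shell
# one D2 v3 master (~GBP 66.55/month) plus two A0 agents (~GBP 9.982/month each)
awk 'BEGIN { printf "%.3f\n", 66.55 + 2 * 9.982 }'
```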
The other option is a simple Linux VM, prices are the same as the ones shown above. A single A0 instance would cost ~£9.982 a month. That's much more like it.
So how does this compare to AWS?
AWS also have a container ecosystem called “EC2 Container Service”. It looks like you can use as many VMs as you want. Going through some of the tutorials, it's looking kind of complicated and for this project I don't really have time to figure it out. Instead, let's keep it simple and look at pricing for simple Linux VMs.
Amazon calls its VMs EC2 instances. Pricing is on the Amazon EC2 Pricing page.
Annoyingly, Amazon only provides hourly pricing and only in Dollars. Would it really be so hard to show an example monthly price? Or even in your local currency? Let's get our calculator out.
I'm assuming a month has 744 hours. Currency conversions were done via Google using today's exchange rates which will of course be out of date by the time you read this :) Bear in mind I'm looking at the EU (London) region.
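As a sanity check, here's the sum for a t2.micro. The hourly rate and the USD-to-GBP exchange rate are the ones I used at the time - both are assumptions, so substitute current figures:

```shell
# t2.micro in EU (London): ~USD 0.0132/hour, 744 hours/month,
# converted at an assumed rate of USD 1 = GBP 0.8187
awk 'BEGIN { printf "%.2f\n", 0.0132 * 744 * 0.8187 }'
```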
AWS or Azure?
So I'm looking at an Azure A0 instance at £9.982 a month versus an AWS t2.micro at £8.04 a month. There may be other factors at play here which I've missed but taken on face value, AWS is the winner here. Nearly £2 cheaper a month than Azure and you get slightly more memory. There's not really a lot in it but I decided to go with AWS.
If your company subscribes to the Microsoft Action Pack or MSDN then you probably get free Azure credits every month so Azure is the no brainer option for you.
Getting started with AWS
Create an AWS account
If you don't already have one, browse out to the AWS console website and create yourself an account and login.
Create an Amazon EC2 Container Service Repository
After doing some research it looked like I'd need to use one feature of the EC2 Container Service to help get the Docker image off of my dev machine and on to the AWS VM where we want to run it. We need to create a repository to upload the image to. As far as I can see this is a free service. More information on the AWS Amazon ECR Repositories page.
EC2 Container Service
To create a repository we need to head over to the EC2 Container Service section. Click "Services" in the top left of the header bar. Then click "EC2 Container Service" under the "Compute" section.
Create a repository
On the Amazon ECS page you've got three options listed on the left hand side. Clusters, Task Definitions and Repositories. Click Repositories.
A list of repositories will be displayed on the left. It'll be empty to start with. Click the Create repository button.
Step 1: Configure repository
Enter a Repository name. I used "docker_demo". Then click the "Next step" button.
Step 2: Build, tag, and push Docker image
A repository has now been created.
Take note of the big green box, it contains the repository name which we'll need to use later.
In my screen-shot, it's the bit that I partially blanked out, ending in ".amazonaws.com/docker_demo". This page also lists some useful commands we can execute from the command line to help us get our Docker image into the repository. We'll be using these later.
Keep this page open as you'll need to run some of these commands later on. Probably a good idea to cut and paste them into a text document somewhere for safe keeping.
Unfortunately, uploading the HelloWorld Docker image to the repository requires a little bit of work. Time for some command line magic...
Uploading an image to the AWS image repository
Install the AWS command line tools
Before you can upload the image, you'll need to install the AWS command line tools on your dev machine.
Configure the AWS command line tools
Before you can use the AWS Command Line tools, you'll need to set them up. This is primarily a security thing - so that the tools can access your AWS account.
To do this, you'll need an AWS Access Key. Browse out to the Security Credentials page on the AWS website. Click the "Create New Access Key" button. On the dialog that appears click "Download Key File" and save it somewhere safe on your dev machine. Close the dialog.
Open the file you just downloaded. You'll find that it contains an AWSAccessKeyId and an AWSSecretKey.
On your dev machine, open the command prompt and run the following command:
aws configure
You will be asked for your "AWS Access Key ID". Copy and paste in the AWSAccessKeyId from the file and hit enter.
Next, you'll be asked for your "AWS Secret Access Key". Again, copy and paste in the AWSSecretKey from the file and hit enter.
Then you'll be asked to enter the "Default region name". I'm not sure if this is important - you can change it later by re-running the "aws configure" command. You should be able to leave it blank and just hit enter.
Lastly, you'll be asked to enter the "Default output format". Again, I'm not sure if this is important. I left it blank and hit enter.
Assuming everything went well, the AWS command line tools should now be authorised to talk to your AWS account.
Logging in to the AWS repository
The repository we created in AWS is private so we're going to need login credentials. But there is no mention of login credentials on the AWS website. We're going to have to do a bit of command line magic to get them from AWS, via the AWS command line tools.
Note that later on we're going to need to login to the repository from the AWS VM. Make sure you keep hold of the "docker login" command we're about to get from AWS.
From the command line on your dev machine, run the first command listed on the "Step 2: Build, tag, and push Docker image" page which should be similar to this:
aws ecr get-login --no-include-email --region xxxxxxxxxxx
xxxxxxxxxxx is the region name.
This command should return a long bunch of text along the lines of this (it'll be a lot longer but you get the idea):
docker login -u AWS -p zzzzzzzzzzzzzzzzzz
The zzzz part is a very long, random-looking string. In theory, this is the command you need to execute to log in to the AWS repository. Of course, life is never that easy. The command prompt has helpfully added carriage returns to make it look nice. You'll need to remove them. Copy the text to the clipboard, open your favourite text editor, paste in the command and remove the carriage returns so that all the text is on one line. You should end up with something like this:
docker login -u AWS -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz https://xxxxxxxxxx.amazonaws.com
Be careful not to accidentally delete any characters or add spaces. Paste the command back in to the command prompt and run it.
You should get a message saying "Login Succeeded".
If you're using macOS, it's the same procedure but you may need to move the https://xxxxxxxxxx.amazonaws.com bit from the end of the command to the start, after the "login" bit and remove the "-e none" part.
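If you'd rather not fix the line breaks by hand, `tr` can strip them for you. Here's a sketch using dummy text in place of the real command output - the login command and token below are obviously placeholders:

```shell
# simulate the wrapped output of `aws ecr get-login` with dummy text,
# then delete all carriage returns and line feeds so it's one line
printf 'docker login -u AWS -p zzzz\r\nzzzz\r\nzzzz https://xxxxxxxxxx.amazonaws.com\r\n' |
  tr -d '\r\n'
```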
Uploading an image to AWS
We don't want to deploy the debug version of our code to the cloud so, do a release build. In Visual Studio, flip the build type to Release in the toolbar and rebuild the solution.
There should now be a release build of the helloworld Docker image available to upload to the AWS repository.
You can list the available Docker images on your dev machine by running the following command:
docker images
In the list of images returned, there should be an image named "helloworld" with a tag of "latest".
Before we can upload the image to the AWS repository, we need to associate the image with the AWS repository. This is done using tags.
What are tags? In theory tags are just keywords that you give to an image. Tags are often used for versioning. But this is where it starts getting a bit confusing. Tags are also linked to repositories. Runnable.com has a description of how this works on their Manage and Share Docker Images page.
To associate the image with the AWS repository, we need to tag it and the tag needs to include the repository details. The tag we're going to use is xxxxxxxxxxxxx.amazonaws.com/docker_demo:latest. Where "latest" is the tag and everything before the colon is the name of the repository.
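The repository/tag split can be seen with a bit of shell parameter expansion (using the placeholder repository URL from above):

```shell
# the full image name from the AWS page
IMAGE="xxxxxxxxxxxxx.amazonaws.com/docker_demo:latest"
# everything before the colon is the repository name
echo "${IMAGE%%:*}"
# everything after it is the tag
echo "${IMAGE##*:}"
```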
Copy the "docker tag" command - the fourth command listed on the "Step 2: Build, tag, and push Docker image" page. Don't run it yet. We need to change the third word from docker_demo to helloworld. So it looks something like this:
docker tag helloworld:latest xxxxxxxxxxxxx.amazonaws.com/docker_demo:latest
Run the command. Then, list the available Docker images on your dev machine again by running the following command:
docker images
There should be a new item in the list with the repository name "xxxxxxxxxxxxx.amazonaws.com/docker_demo" and a tag of "latest".
We're now ready to push the Docker image up to the repository. Run the "docker push" command - the fifth command listed on the "Step 2: Build, tag, and push Docker image" page. It should look something like this:
docker push xxxxxxxxxxxx.amazonaws.com/docker_demo:latest
Docker will then upload the image to the AWS repository. It might take a few minutes the first time you run it so be patient.
Note that we'll need to use this command again later on so make sure you keep hold of it somewhere.
When the upload has completed, you should see the image listed on the AWS repository page. Browse out to the EC2 Container Service page. Click "Services" in the top left of the header bar. Then click "EC2 Container Service" under the "Compute" section.
Select "Repositories" on the left hand side. A list of repositories will be shown. Click the docker_demo repository. A list of images in the repository will be shown. There should now be a row with an image tag named "latest".
We've now got a Docker image in AWS that we can pull down from an AWS VM. We now need to create a VM.
Creating a Virtual Machine in AWS
Provision a Virtual Machine (VM)
EC2 instances == Virtual Machines
Just to be confusing, AWS refers to Virtual Machines as EC2 instances.
To get to your Virtual Machines, click "Services" on the top left in the header bar. Then click "EC2" under "Compute" on the right hand side.
Select a region & create a VM
Before creating a VM, you need to select the region it will be located in. What you're essentially doing is selecting the physical data centre that will host your VM. Make sure you select the best region before creating your VM as moving it to a different region is a pain.
The currently selected region is displayed in the top right of the header bar. All the VMs listed on the page are located in that region. Click the region name in the header and a drop down list of all available regions will appear. Click a different region to change the currently selected region. Any new VMs you create will be located in that region.
You can create VMs in any region. It's usually best to locate them in a region that's as close to your users as possible.
To create a new VM, click either of the "Launch Instance" buttons.
Step 1: Choose a Machine Image
As the description on the page states, a Machine Image (or Amazon Machine Image - AMI) is a template containing an OS and applications. From a technical perspective, an AMI is really just the snapshot of a hard-disk / SSD which has had an OS and various Apps pre-installed on it. When you create a new VM, AWS clones the AMI to create the hard-disk / SSD for your VM.
Both Amazon and various third parties have created AMIs containing different Operating Systems and pre-installed applications that you can use as the basis of your VM.
We're going to select an AMI created by Amazon that contains Amazon's version of Linux. Amazon have optimised their version of Linux for AWS so it seems like a good choice for Linux newbies like me. Apparently, it's based on Red Hat Enterprise Linux (RHEL) and CentOS.
Click the "Select" button next to "Amazon Linux AMI...".
Step 2: Choose an Instance Type
This is the hardware (virtualised hardware) that your new VM will have. Since the VM will be running Linux and a super lightweight web service, a single CPU core and 1 GB of memory will be fine for our purposes. You can easily upgrade the hardware spec later on if needed. The better the hardware, the more you pay.
Select the t2.micro instance.
Click the "Next: Configure Instance Details" button at the bottom of the page.
Step 3: Configure Instance Details
This page contains additional configuration options for your new VM. You don't normally need to change anything here although I do like to change one setting on this screen. Tick the "Protect against accidental termination" checkbox. This setting just makes it harder to accidentally delete your VM.
Click the "Next: Add Storage" button at the bottom of the page.
Step 4: Add Storage
This is the size and type of storage that your VM will run on - the storage that the AMI will be copied onto. Unless you need high performance, stick with the General Purpose SSD option.
The default SSD size is 8 GB. This may seem a little small but remember that we're installing a nice slim version of Linux rather than Windows. It should be plenty big enough but you can bump this up to 30 GB (or more) if you want.
Click the "Next: Add Tags" button at the bottom of the page.
Step 5: Add Tags
We're not interested in tags so go ahead and click the "Next: Configure Security Group" button at the bottom of the page.
Step 6: Configure Security Group
What are security groups? As the short explanation at the top of the page states, security groups are AWS's version of firewalls for VMs. AWS has a guide, Using Network Security, with more info.
Firstly, security groups are part of AWS's network infrastructure. They don't run on the VM itself, unlike Windows Firewall. It's all done outside the VM. This has the advantage that you don't need access to the VM itself to change its firewall configuration.
Secondly, security groups are defined globally. Think of a security group as a shared firewall configuration. You configure each VM to use a security group. You can configure as many VMs to use the same security group as you like. The rules in the security group are applied to all of the VMs. This makes it a whole lot easier to manage the firewall configuration for multiple VMs.
By default, AWS will create a new security group for a new VM. That's fine for the first one but it's often better to create a single security group for a group of related VMs and configure them all to use the same security group. You can select an existing security group using the "Assign a security group" section. We'll select "Create a new security group" for this VM.
I'm not sure if this is still the case, but when I did this you couldn't change a security group's name after creating it. I set the name of our new security group to be "DockerServer".
You can modify the firewall rules later but let's set them up now, while we're here.
Security Group configuration
By default, the only protocol allowed through the firewall is SSH. If you don't know what SSH is, it's Secure SHell. Kind of like remote desktop only for the command prompt. You won't be able to remote desktop to this VM. Everything will be done from the command line.
Note the big warning at the bottom of the page. By default the "Source" for SSH contains "0.0.0.0/0". This means anyone, anywhere can try and connect to your VM via SSH. They'd need to know the login credentials but still, this is not ideal.
Ideally, you should change "0.0.0.0/0" to whatever your IP address is. So that your computer is the only computer that the firewall will allow to connect to SSH. Google "What's my IP address". Enter your IP address and append "/32" on the end (don't ask). If your IP address is 203.0.113.10 you would enter "203.0.113.10/32". If your IP address changes you won't be able to SSH to your VM. Don't panic, you can change the IP address in the security group any time you want.
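The "/32" bit is CIDR notation: the suffix says how many leading bits of the address are fixed, so the number of addresses a rule covers is 2^(32 - suffix). A quick sketch of why /32 means "just me" and /0 means "everyone":

```shell
# number of IPv4 addresses covered by a CIDR suffix is 2^(32 - suffix)
echo $(( 1 << (32 - 32) ))   # /32 covers exactly 1 address: your machine
echo $(( 1 << (32 - 0) ))    # /0 covers all 4294967296 addresses: anyone
```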
Since the .NET Core Web Service that will be running on this VM runs on HTTP, we'll need to enable HTTP in the firewall. Click "Add Rule". A new row will be added, below SSH.
For the new row, change the "Type" in the first column from "Custom TCP..." to "HTTP". For the purposes of this project, we want the web services to be accessible to everyone so we'll leave the source as "0.0.0.0/0, ::/0". The "::/0" bit is for IPv6, again, don't ask...
Click the "Review and Launch" button at the bottom of the page.
Step 7: Review
A summary of your new VM configuration is shown for you to review. If you're happy with it, click the "Launch" button at the bottom of the page.
A popup dialog will appear prompting you to configure a key pair for the VM.
Key pairs are all to do with security. A key pair is required to remotely connect to your VM. Each VM is associated with a specific key pair and it can't be changed after the VM has been created. You can create as many key pairs as you like - each one is named. Key pairs are stored globally in AWS and you can either create a new one for your VM or use an existing one. If you're managing lots of VMs then it's usually easier to use the same key pair for multiple VMs.
When you create a new key pair, you're given a pem file. You'll need this file to access any VM associated with that key pair. After you've created the key pair, there's no way to get the pem file back from AWS. If you lose it you're screwed - see the warning on the web page!
Enter a name for the key pair, I used "AWSKeyPair". Then click "Download Key Pair" and save the downloaded pem file somewhere safe.
Click the "Launch Instance" button to continue.
Allocate an Elastic IP address
Elastic IP Addresses
By default, AWS gives your VM a temporary IP address. That's fine until you restart your VM, at which point you get a new IP address. AWS allows you to have a fixed IP address for free. But you'll need to manually set it up.
More AWS terminology. AWS calls fixed IP addresses Elastic IP addresses.
To configure your Elastic IPs, head over to the EC2 Dashboard. Click "Services" on the top left in the header bar. Then click "EC2" under "Compute" on the right hand side.
Since you've now got a VM, the EC2 Dashboard gives you a bunch of useful options in the right hand pane.
You can modify your VMs by clicking "X Running Instances". To modify Security Groups click "X Security Groups". To modify your fixed IP addresses click "X Elastic IPs".
Click the "X Elastic IPs" link.
Allocate a new IP Address
The Elastic IPs page lists your IP addresses. Initially, it will be empty. Click the "Allocate new address" button.
On the page that appears, click the "Allocate" button. It should display a "New address request succeeded" message. Click the "Close" button.
Associate the new IP Address with your VM
The Elastic IPs page should now list the new IP address. At this point, you've reserved an IP address but it's not associated with any particular VM.
Note that AWS will charge you for any Elastic IP addresses while they are not associated with a VM. I guess this is to encourage people not to hoard unused IP addresses.
To assign the IP address to your VM, make sure the new IP address is selected in the list by clicking on it. Then click the Actions button at the top of the page. In the drop down menu that appears, select the Associate address option.
On the Associate address page, leave the Resource type set to Instance.
Then click on the Instance drop down to show a list of VMs that you can associate the IP address with. There should only be one listed. In the screen-shot mine is named Docker - the name on yours will probably be blank. Click it.
Ignore the Private IP and Reassociation fields and click the "Associate" button at the bottom of the page.
You should get a success message.
You'll be returned to the Elastic IPs list. The Instance and Private IP address columns should now be populated to show that your IP address is now associated with a VM (instance).
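As an aside, the same allocate-and-associate steps can be done with the AWS CLI, if you have it installed and configured. A rough sketch, where i-xxxxxxxxxxxx stands in for your VM's instance id and eipalloc-xxxxxxxx for the allocation id printed by the first command:

```shell
# Reserve a new Elastic IP address (the output includes an AllocationId).
aws ec2 allocate-address --domain vpc

# Associate it with your VM, using the ids from the previous output
# and the EC2 console (placeholders shown here).
aws ec2 associate-address --instance-id i-xxxxxxxxxxxx \
    --allocation-id eipalloc-xxxxxxxx
```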
That's it. You've now got a Linux VM with a fixed IP address. Next we need to get remote access to the VM so that we can get the Docker image onto it and start a container.
Getting a Docker image onto your VM
Install an SSH client
SSH (Secure SHell) is used to remotely control a VM via the command line. You'll need to use SSH to configure Docker on the AWS VM. Amazon has some documentation on how to connect to your VM using SSH Connecting to Your Linux Instance Using SSH.
The first thing you need to do is install an SSH client on your dev machine. If you're using macOS then you've already got SSH. Just fire up the terminal.
Sadly, Windows doesn't come with an SSH client but there are several options.
If you've got Git for Windows installed then you already have an SSH client - it's just hidden away. You can access it by adding its location to your path. Fire up PowerShell as Administrator (setting a machine-level environment variable needs admin rights) and run the following commands:
$new_path = "$env:PATH;C:\Program Files\Git\usr\bin"
[Environment]::SetEnvironmentVariable("path", $new_path, "Machine")
The HurryUpAndWait website has an article Need an SSH client on Windows? Don't use Putty or CygWin...use Git.
Another alternative is to install the Windows Subsystem for Linux. The HowToGeek website has instructions How to Install and Use the Linux Bash Shell on Windows 10. If you do follow these instructions, you only need to follow the first part, about installing the Bash shell. Ignore all the other bits about installing Linux software.
There are other SSH Clients for Windows - give Google a search...
Connecting to the VM using SSH
Instructions are on the Amazon web page Connecting to Your Linux Instance Using SSH.
Fire up the command prompt. You'll need to locate that pem file you downloaded earlier, when you created the VM. In the command prompt, change directory to the folder containing the pem file.
If you're running on Linux or macOS, execute the following command. Replace xxxxxxxxxx with the name of your pem file:
chmod 400 xxxxxxxxxx.pem
Now we need to run SSH. Use the following command. Again, replace xxxxxxxxxx with the name of your pem file. Also, replace xx.xx.xx.xx with the IP address of your VM. That's the Elastic IP you associated with your VM earlier.
ssh -i xxxxxxxxxx.pem ec2-user@xx.xx.xx.xx
If you get a Permission denied (publickey) error then most likely you're using the wrong pem file or IP address.
If everything went OK then the VM will return a welcome message, something like the screen-shot. Your command line session is now connected to the remote VM.
Note the [ec2-user@ip-xx-xx-xx-xx~]$ immediately before the cursor. This tells you you're logged in to the VM and any commands will be executed remotely on the VM.
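As a convenience, you can save the connection details in your SSH config file (~/.ssh/config on Linux and macOS) so you don't have to type the key file and IP address every time. A sketch, using a made-up host alias "aws-docker" and placeholder values you'd swap for your own:

```
Host aws-docker
    HostName xx.xx.xx.xx
    User ec2-user
    IdentityFile ~/AWSKeyPair.pem
```

After that, "ssh aws-docker" is all you need to connect.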
If there is a message about packages needing to be updated for security like there is in the screen-shot, I would recommend running the suggested command. In this case "sudo yum update".
We've now got remote access to the VM. We can now go ahead and get Docker sorted out on the VM.
Strangely, although the Amazon Linux AMI description mentions Docker, it wasn't installed on my VM. You can check by running the command "docker" - I had to install it myself. Run the following command:
sudo yum install docker -y
This should download and install docker. The install process should finish with the message "Complete!".
Docker has been successfully installed. Now we need to start Docker - it runs as a background service. Run the following command:
sudo service docker start
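Optionally, you can avoid having to prefix every docker command with "sudo " by adding the ec2-user to the docker group. A sketch - run it on the VM, then log out of the SSH session and back in for the group change to take effect:

```shell
# Add ec2-user to the docker group so docker commands run without sudo.
# Log out and back in afterwards for the new group membership to apply.
sudo usermod -a -G docker ec2-user
```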
Log in to the AWS repository
Remember when you logged in to the repository from your dev machine? You saved the "docker login" command, right? You'll need to run it on the VM. But, we'll need to prepend it with "sudo ". In the SSH command prompt type "sudo " (with the space but without the quotes) and then paste in the "docker login" command and hit enter.
You'll get a "Login Succeeded" message if that worked.
Pulling down an image to the AWS VM
We're going to do the opposite of the "docker push" command we ran from the dev machine. Go back and find the command you ran. It was the fifth command listed on the "Step 2: Build, tag, and push Docker image" page. It should look something like this:
docker push xxxxxxxxxxxx.amazonaws.com/docker_demo:latest
Take the command and replace the first part, "docker push" with "sudo docker pull". So it looks something like this:
sudo docker pull xxxxxxxxxxxx.amazonaws.com/docker_demo:latest
Copy and paste the command into the SSH command prompt and hit enter.
Docker will then download the image from the repository to the VM.
So now, after all that effort, we can finally fire up a container! Run the following command. Replace the xxxxxxxxxxxx bit with the same text you used in the preceding commands.
sudo docker run -d -p 80:80 xxxxxxxxxxxx.amazonaws.com/docker_demo:latest
Note that 80:80 tells Docker to map port 80 on the Linux VM to port 80 inside the container - the format is host_port:container_port. You can change this as needed. If you use a different host port, you'll need to add it to the Security Group in AWS, otherwise the firewall will block it.
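For example, if you wanted the service reachable on port 8080 of the VM instead, only the host side of the mapping changes. A sketch, reusing the placeholder image name from above:

```shell
# Hypothetical variation: expose the container's port 80 on host port 8080.
# Port 8080 would also need an inbound rule in the AWS Security Group.
sudo docker run -d -p 8080:80 xxxxxxxxxxxx.amazonaws.com/docker_demo:latest
```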
To test that it's working simply open your web browser and browse out to the fixed IP address - the Elastic IP you assigned to the VM. The response should be the text "Hello World!".
Here are a few useful docker commands that you can run either on your dev machine or on the SSH command prompt. If you run any of the commands from the SSH command prompt, you'll need to prefix them with "sudo ".
To list the Docker images:
docker images
To list the Docker containers (add the -a flag to include stopped containers):
docker ps
To stop a running container. First run the docker ps command above, find the id of the container to stop and then replace <container_id> with the container id:
docker stop <container_id>
To remove a container:
docker rm <container_id>
To remove all containers:
docker rm $(docker ps -aq)
To remove an image:
docker rmi <image_id>
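One construct above is worth a quick note: in "docker rm $(docker ps -aq)", the $( ) is shell command substitution - the inner command's output (here, all container ids, thanks to the -q "quiet" flag) is spliced into the outer command line before it runs. A stand-alone illustration, using a made-up fake_ids function and echo in place of the real docker commands:

```shell
# Simulate the output of "docker ps -aq": one container id per line.
fake_ids() {
    printf 'abc123\ndef456\n'
}

# The shell splices the ids into the command line before running it,
# so this expands to: echo docker rm abc123 def456
echo docker rm $(fake_ids)
```

The same expansion is what lets the single docker rm invocation remove every container in one go.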