This repo contains a digitized version of the course content for CYBR8470 Secure Web App Development at the University of Nebraska at Omaha.
Application containers
Good fences make good neighbors:
Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules.
Encapsulation is an object-oriented concept where all data and functions required to use the resource are packaged into a single self-contained component.
Process Isolation keeps separate processes from accessing each other's memory.
Application containers follow a similar philosophy.
Deploy, run and publish a container
Manage container interactions
Setting up a dev environment
Additional Resources
Acknowledgements
License
Let’s start with a container based on Alpine Linux
First, we need to download a container blueprint called an Image
# Download alpine container image from Docker Hub
docker pull alpine
You should see some download activity. What just happened?
Docker Hub is a registry of images from authoritative sources and individual users.
By default, the latest image is downloaded. This label is called a Tag. Let's check locally available images and note their sizes.
# Check images available locally on your machine
docker images
Observation: Docker images are much smaller than typical Virtual Machines
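A specific Tag can also be requested explicitly instead of the default. A minimal sketch (assuming the edge tag is still published for alpine on Docker Hub):
# Pull an image with an explicit tag instead of the default `latest`
docker pull alpine:edge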
Let’s create and start a container from the alpine image
# See list of docker commands
docker
# `run` executes a command in a new container (creates it too)
# -it provides an interactive tty shell into the container
# --name provides a name for your new container
# `alpine:latest` is the image name and its tag
docker run -it --name myAlpine alpine:latest
If the previous command was successful, the container is created and you are returned an interactive shell into the container. The shell prompt looks like this: / #
Try some commands in the container shell
# Some commands to try
whoami # enuf said!
cd / # Switch to the root directory
ls -la # List contents of the root directory
ping google.com # Hit CTRL+C to exit
exit # Stop the shell to exit container
Notice that you are root in the container!
# List all containers (running or stopped)
docker ps -a
# Remove the container by ID or name
docker rm <container-ID or name>
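As an aside, assuming a reasonably recent Docker release, all stopped containers can be removed in one step rather than one at a time:
# Remove all stopped containers after a confirmation prompt
docker container prune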
Volumes have to be initialized at container creation time.
The -v option mounts a volume.
In a new Powershell:
# Create a new host directory called app
mkdir app
# Host /app folder mapped to container /webapp folder
# `\` allows you to continue a long command on a new line
docker run -it --name myAlpineWithVol -v /c/Users/student/app:/webapp alpine:latest
You may get a prompt to share the C: drive with Docker. Accept that and enter your account password. Once access is granted, a container shell will be returned.
Caution 😡:
By default, a mounted volume allows full read/write by the container.
This creates exceptions to the Process Isolation principle.
Mount options can restrict this behavior, for example:
-v /c/Users/student/app:/webapp:ro # mount the host folder read-only in the container
-v /c/Users/student/app:/webapp:cached # relax consistency guarantees for better performance
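Here is a minimal sketch of the read-only option in a full command, assuming the same host path as before; the container name myAlpineRO is just an illustrative choice:
# Create a container whose /webapp mount is read-only
docker run -it --name myAlpineRO -v /c/Users/student/app:/webapp:ro alpine:latest
# Inside the container shell, writes to the mount should fail
# with a "Read-only file system" error
touch /webapp/blocked.txt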
In a new Powershell:
# Change directory to the `/app` directory on the host
cd app
# Create a new file and add some text
set-content test.txt "Nebraska Gencyber Rocks"
Back in the container shell:
# List contents of the test.txt file
cat test.txt
# You should be able to see the file contents
Observation: Container and host are able to share files.
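To confirm the mapping from the host side, Docker's inspect command can print the mount configuration; a short hedged example:
# Show the volume mounts configured for the myAlpineWithVol container
docker inspect -f "{{ .Mounts }}" myAlpineWithVol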
It is always good to stop containers when not in use to free up system resources.
In a new Powershell:
# List all running containers
docker ps
# Stop a running container
docker stop myAlpineWithVol
Services are bound to container ports. We need to expose container ports to the network to access these services remotely.
Let’s create a container that runs an HTTP server in two commands! First, download an image for Lighttpd from Docker Hub
In a new Powershell:
# download a container for lighttpd, a lightweight HTTP server
docker pull gists/lighttpd
In a container spawned from this image, we need to expose Port 80 to access the web server.
We do this by mapping the container's port to a port on the host.
Port mappings have to be initialized at container creation time
The -p option maps a host port to the container port.
In Powershell:
# -d option runs the container in detached mode (background)
# -p 8888:80 maps host port 8888 to container port 80
# -v maps the host app directory to the web directory in the container
docker run -d --name lighttpd -p 8888:80 -v /c/Users/student/app:/var/www gists/lighttpd
# Check the mapped port in container listing
docker ps -a
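The mapping can also be queried directly; a brief example (no assumptions beyond the container name used above):
# Show the port mappings for the lighttpd container
docker port lighttpd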
Since we have a volume mapped, let’s author a simple HTML index file and drop it in the web root of the container. We should be able to browse to this page if the port mapping works as expected.
In a new Powershell:
# Change directory to the `/app` directory on the host
cd app
# Add a simple HTML file
set-content index.html "<html>My first Container App</html>"
Now browse to http://localhost:8888
Return to Powershell:
# Update the HTML file
set-content index.html "<html><h1>Cool</h1></html>"
Now browse to http://localhost:8888
Observations:
- Separation of persistent code from the application runtime
- Host file updates are instantly reflected in the container application
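If you prefer to verify the update from the terminal instead of the browser, a hedged PowerShell sketch:
# Fetch the page over HTTP and print its body
Invoke-WebRequest -UseBasicParsing http://localhost:8888 | Select-Object -ExpandProperty Content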
Let’s stop the container service and delete the container before we move on.
# Stop a running container named lighttpd
docker stop lighttpd
# Delete container named lighttpd
docker rm lighttpd
# List all containers (running or stopped)
docker ps -a
Typing long docker commands in a terminal is cumbersome 😖
Luckily, a Dockerfile automates the build process. It is akin to a "recipe" that the Docker engine understands.
Examine the Dockerfile for the gists/lighttpd image that we pulled from Docker Hub earlier: https://github.com/iHavee/dockerfiles/blob/master/lighttpd/Dockerfile
Here is a reference for Dockerfile directives: https://docs.docker.com/engine/reference/builder/
# Start with this base image
FROM python:2.7.13
# Set environment variables
ENV PYTHONUNBUFFERED 1
# Set the working directory in
# which RUN and CMD options will run
WORKDIR /var/www/backend
# RUN commands run at container build time
# Used to install applications
RUN pip install Django
RUN pip install djangorestframework
RUN pip install markdown
RUN pip install django-filter
RUN pip install psycopg2
Let us clone a repository that includes the above Dockerfile.
The Dockerfile is typically in the top-level project directory. For this, we are going to use a recent web server demo we used this summer in a GenCyber camp. The server was built to interact with little IoT devices and show events gathered from the device.
In a new Powershell:
# Switch to Desktop
cd ~/Desktop
# Clone the dev repository
git clone https://github.com/mlhale/nebraska-gencyber-dev-env
# Switch to the cloned repository
cd nebraska-gencyber-dev-env/
git submodule sync
git submodule update --init --recursive --remote
cd backend/
git checkout tags/step-10-server
In Powershell:
# Switch to the cloned repo directory
cd ~/Desktop/nebraska-gencyber-dev-env
# Examine the Dockerfile
get-content Dockerfile
In Powershell:
# Use the `build` command and supply a DockerFile
# `-t` option provides a name and tag for the image
docker build -t django:dev .
# List local images
docker images
If the build is successful, django appears in your local image listing.
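Since this lesson also covers publishing containers, here is a hedged sketch of pushing the image to Docker Hub; <your-dockerhub-username> is a hypothetical placeholder for your own account:
# Log in to Docker Hub with your own credentials
docker login
# Re-tag the local image under your Docker Hub namespace
docker tag django:dev <your-dockerhub-username>/django:dev
# Push the tagged image to the registry
docker push <your-dockerhub-username>/django:dev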
Here is how to delete the image we just created.
In Powershell:
# Use the `rmi` command and supply an image name
docker rmi django:dev
# List local images
docker images
If the command is successful, django is removed from your local image listing.
Tip: To delete a container, use the command
docker rm container-name
Apps may require additional services in their environment. For example, a database service.
The docker-compose tool automates building your app's services all at once and links them as described in a docker-compose.yml file.
Here is the docker-compose.yml file:
# Compose file format version
version: "3"

# Declare services
services:
  # Name of the Postgres Database service
  db:
    # Behavior upon container exit
    restart: always
    # Base image is postgres
    image: postgres
    # Per-service volume list
    volumes:
      - postgres-config:/etc/postgresql
      - postgres-data:/var/lib/postgresql/data
      - postgres-logs:/var/log/postgresql
      - ./database-backup:/database-backup
  # Name of the Django service
  django:
    # Use the Dockerfile to build this image
    build: .
    # Override the default command
    command: python /var/www/backend/manage.py runserver 0.0.0.0:8000
    # Per-service volume list
    volumes:
      - ./backend:/var/www/backend
    # Expose ports
    ports:
      - "80:8000"
    # Link Django to Postgres
    depends_on:
      - db

# Declare named volumes
volumes:
  # These keys are left empty to use docker engine defaults
  postgres-config:
  postgres-data:
  postgres-logs:
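Before building, the compose file can be sanity-checked; as a hedged aside:
# Validate docker-compose.yml and print the effective configuration
docker-compose config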
The docker-compose tool can build containers with a single command.
In a new Powershell:
# Switch to project directory
cd nebraska-gencyber-dev-env
# build images
docker-compose build
# List local images
docker images
If the build is successful, nebraskagencyberdevenv_django appears in your local image listing. Services are built once and then tagged, by default as projectname_service.
Before running the built containers, additional configuration steps are often needed.
The steps below are specific to our setup and will vary with applications
In the previous Powershell:
# run option executes a one-time command against a service
docker-compose run django bash
In the returned container shell:
# Perform Django configurations
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --username admin --email admin
exit
Back in the previous Powershell:
# One simple command to start the entire application
docker-compose up
Navigate to http://localhost to examine the running app.
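If you would rather keep the terminal free, a hedged alternative is to run the services detached and inspect their logs separately:
# Start all services in the background (detached mode)
docker-compose up -d
# Follow log output from the django service
docker-compose logs -f django
# List the running compose services
docker-compose ps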
While pressing CTRL+C in the terminal once will shut down the containers, here is a better way.
In a new Powershell:
# Examine running containers
docker ps
# Gracefully shutdown the containers
cd nebraska-gencyber-dev-env
docker-compose stop
The docker-compose down command will shut down and delete the containers, so be careful when using it.
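As a further hedged caution, adding the -v flag to down also removes the named volumes declared in the compose file, which would wipe the Postgres data:
# Shutdown, delete containers, AND remove named volumes (destroys database data)
docker-compose down -v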
Pretty neat. Observe your handiwork as you take this quiz: https://www.qzzr.com/c/quiz/430097/the-container-quiz
For more information, investigate the following:
This tutorial was initially inspired by this blog post by James. Thanks to thoughtful comments and reviews by Dr. Matthew L. Hale.
Modified for CYBR8470 by Matt Hale.
Nebraska GenCyber Overall content: Copyright (C) 2017 Dr. Matthew L. Hale, Dr. Robin Gandhi, and Doug Rausch.
Lesson content: Copyright (C) Robin Gandhi 2017.
This lesson is licensed by the author under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.