
Article 1: Creating a static web page that is hosted on S3

2022-02-05

Goal

Well, in order to start a blog, you need something that you can show on the web. The easiest thing that you can put on the web is a single HTML page without any styling. To avoid getting lost in layout/design in the very first step, I really wanted to just throw out some content. Even so, there are already a few things that we have to do:

  1. write the content into an html file
  2. be able to test the page locally
  3. put it on S3 and make it publicly available
  4. get a domain, so that we can be found easily
  5. summary

Implementation

1. write the content into an html file

HTML was invented to create structured documents that can be accessed via the internet. If you want more information, head over to Wikipedia or Google. Browsers know how to interpret specific elements called 'tags' and how to display them. The basic structure of an HTML document looks like this:


<html>
 <head>
  <title>Title that is displayed in the browser tab</title>
 </head>
 <body>
  <h1>Headline</h1>
  <p>paragraph</p>
 </body>
</html>

HTML files consist of a tree of blocks. The outermost block is the html block, which denotes that the data in this file is an HTML document. A block is delimited by a start tag like <html> and an end tag like </html>. Our html block has two children, called the head and body blocks. The head block contains the page's title. It usually also holds references to other resources to be loaded (e.g. JavaScript or CSS files). The body block contains the visible content of our page. Amongst other blocks, we use the h1 block to create a headline or the p block to create paragraphs on our page. If you want to find out more about the other tags I used in this document, go visit the W3Schools HTML tag list.
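The "tree of blocks" idea can also be sketched in a few lines of code. This is just an illustration, not part of the page we are building; the tag helper below is a made-up function that wraps its children between a start tag and an end tag, mirroring how the document above nests:

```typescript
// Hypothetical helper: builds one block from a tag name and its child blocks.
function tag(name: string, ...children: string[]): string {
  return `<${name}>${children.join('')}</${name}>`;
}

// Rebuilding the example document from nested blocks:
const page = tag('html',
  tag('head', tag('title', 'Title that is displayed in the browser tab')),
  tag('body', tag('h1', 'Headline'), tag('p', 'paragraph')),
);

// Prints the whole page on one line, e.g. <html><head>...</head><body>...</body></html>
console.log(page);
```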

2. Be able to test the page locally

In order to test the page locally in a proper way, you need a web server. Some browsers have problems displaying pages from the file system (i.e. via a path to a file on your disk). So I decided to spin up nginx locally using Docker and use it as a web server that serves static web pages. We won't dive too deep into the topic, because there are quite a few elements involved that would need a lot of explanation. But I'll provide a short guide to get it running.

  1. Install Docker Desktop First we need to install Docker Desktop on your computer. You can find the download and instructions here.
  2. Run nginx without any configuration Now we want to run nginx without any configuration, just to see if it works. Execute this from your terminal: docker run --name mynginx1 -p 80:80 -d nginx Don't forget to start Docker before running this command. If you go to your browser and type in http://localhost you should see the default nginx start page.
  3. Configure nginx to serve pages from the content folder

    Of course it's not very exciting to just show the default nginx start page. In order to change this we need to set up nginx properly. For now we just want to show the static content from our content folder, so we will stick with the default configuration. All files in '/usr/share/nginx/html' will be served as static content. So we will run the default nginx Docker image and mount our content folder at the static content location. This is how to run it:

    docker run --name nginx_fanderl_rocks --mount type=bind,source="$(pwd)"/content,target=/usr/share/nginx/html,readonly -p 80:80 -d nginx

    --name nginx_fanderl_rocks: name of the docker container on your local computer

    --mount type=bind,source="$(pwd)"/content,target=/usr/share/nginx/html,readonly: this statement tells docker to mount the local content folder, which holds our static web pages, to the folder that holds static html files by default in nginx.

    -p 80:80: this statement tells docker to publish port 80 through the docker host. This makes it possible that you can type localhost into your browser and see the nginx default start page. Port 80 is the default port for http, the default web protocol.

    -d: tells docker to run the container as a detached process, so it will not respond to stdin anymore and will not block your terminal.

    nginx: tells docker to use the nginx docker image from docker hub

  4. Test the page in your local browser If you enter http://localhost/ in your browser you will see the page from the content folder rendered in your browser. Notice that the filename after the domain matches the filename of the file in our content folder. Something worth noting is that just typing localhost into some browsers will not work, because the default protocol for most browsers has changed from http to https, and we have not set up serving pages via https (https is the encrypted version of http). The default configuration for nginx also allows us to use index.html as the root page of our homepage. So http://localhost/ and http://localhost/index.html will show you the same content.
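To make that last point concrete, here's a small sketch of how nginx's default configuration maps request paths to files on disk. The urlToFile helper is hypothetical, not real nginx code; the root folder and index document name are the defaults of the nginx Docker image:

```typescript
// Defaults of the official nginx image (not something we configured ourselves):
const root = '/usr/share/nginx/html';
const indexDocument = 'index.html';

// Sketch of the default path-to-file mapping.
function urlToFile(path: string): string {
  // A trailing slash (including the bare '/') falls back to the index document.
  if (path.endsWith('/')) {
    return `${root}${path}${indexDocument}`;
  }
  return `${root}${path}`;
}

// http://localhost/ and http://localhost/index.html resolve to the same file:
console.log(urlToFile('/'));           // /usr/share/nginx/html/index.html
console.log(urlToFile('/index.html')); // /usr/share/nginx/html/index.html
```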

3. Put it on s3 and make it publicly accessible

In order to make our webpage accessible from the public internet, we need to make our files publicly available. For this purpose I chose to go with a very simple approach, which is putting the files onto AWS S3 (Simple Storage Service). If configured correctly, it can serve static webpages like ours without the need for a dedicated server.

Infrastructure as code

Setting up your infrastructure quickly and in a reproducible fashion has become the industry standard over the past few years. Thanks to cloud services with good APIs, this has become increasingly easy. Since we use AWS, there are a few options that we could use to set up our resources:

  1. CloudFormation CloudFormation is the most basic way that Amazon provides to provision resources in AWS. CloudFormation resources are organised in stacks, where each stack is described by a template written in JSON or YAML.
  2. AWS Cloud Development Kit AWS CDK builds on top of CloudFormation. Aside from the basic CloudFormation resources, called L1 constructs, it offers L2 constructs. L2 constructs are L1 constructs with a useful set of sensible defaults, so most of the L2 constructs are easier to use. There are even higher-level components called patterns, which use multiple constructs from the other levels to complete even bigger tasks.
  3. Terraform Terraform is a cloud-agnostic tool to manage resources. It's similar to CloudFormation in what it does, but it can handle multiple clouds. You can even pass parameters from one cloud to the other. It's also not restricted to the cloud; you can use it to deploy/manage almost anything.
  4. AWS CLI AWS CLI is a command line based tool to access AWS. There are also approaches where people manage their cloud resources using AWS CLI via shell scripts.

For this blog I will go with AWS CDK. I personally think having sensible defaults is very valuable. I'm also not the biggest fan of terraform, because I like to have the state of the deployment tool inside the cloud I'm using. It's a great tool when you have to deal with managing things across clouds or maybe even on premise. Since we will only do AWS for now and I want to learn AWS CDK we will stick with that.

  1. Installing the AWS CDK I used this guide from AWS to set up the AWS CDK. I did not install it globally, as I'm not a big fan of bloating the global nodejs installation. Instead I created a cdk folder that just holds the installation of the CDK. That way it will be found by all the CDK apps that will be hosted inside this folder.
  2. Creating the infrastructure CDK app By running cdk init app --language typescript inside a new infrastructure folder, we create a new CDK app that will hold our infrastructure.
  3. Create a stack for our public S3 bucket This guide explains how to set up static website hosting using S3. It says that we need two S3 buckets that are publicly accessible from the internet. The names of those S3 buckets must match the domain names we want to have. This is the resulting stack defined in our CDK application:
    import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    export class FanderlRocksStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const subdomain = 'www';
        const domainName = 'fanderl.rocks';

        // Root bucket: holds the actual content and serves it as a website.
        const root_bucket = new s3.Bucket(this, 'fanderl_rocks_domain_root_bucket', {
          versioned: false,
          bucketName: domainName,
          websiteIndexDocument: 'index.html',
          publicReadAccess: true,
          removalPolicy: RemovalPolicy.DESTROY
        });

        // Subdomain bucket: holds no content, it only redirects www.fanderl.rocks
        // to the root domain.
        const subdomain_bucket = new s3.Bucket(this, 'fanderl_rocks_subdomain_bucket', {
          versioned: false,
          bucketName: `${subdomain}.${domainName}`,
          websiteRedirect: {
            hostName: domainName
          },
          removalPolicy: RemovalPolicy.DESTROY
        });
      }
    }
  4. Upload the content to S3

    In order to be able to access our files from S3, we have to upload them to the bucket. There are several ways to upload files to an S3 bucket. The simplest one is probably to use the AWS CLI. You can upload files by executing the following statement from the root folder of this repository:

    aws s3 sync content/ s3://fanderl.rocks

    The other method I stumbled across when searching how to set up S3 buckets with CDK is to use CDK itself to upload the files. I chose this approach because there's nothing extra we need to maintain. CDK is a bit slow if you need to do it often, but I think it's good enough as a start and I want to give it a try. Here's the code from our stack:

    // Requires: import * as s3Deployment from 'aws-cdk-lib/aws-s3-deployment';
    new s3Deployment.BucketDeployment(this, 'fanderl_rocks_static_website_content', {
      sources: [s3Deployment.Source.asset('../../content')],
      destinationBucket: root_bucket
    });

    Now we can test that our page can be accessed through the public internet via its S3 URL - https://s3.eu-west-1.amazonaws.com/fanderl.rocks/index.html.
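The same object is reachable under two different kinds of endpoint, which is worth keeping apart before we wire up DNS. A small sketch of both URL shapes for our bucket (the region is taken from the URL above; the s3-website hostname format is the documented one for eu-west-1, but double-check it for other regions):

```typescript
const bucket = 'fanderl.rocks';
const region = 'eu-west-1';

// Plain S3 REST endpoint (path-style), the URL used in the test above:
const restUrl = `https://s3.${region}.amazonaws.com/${bucket}/index.html`;

// S3 static-website endpoint, which is what the DNS alias in the next step
// will point at (note: http only - website endpoints don't serve https):
const websiteUrl = `http://${bucket}.s3-website-${region}.amazonaws.com`;

console.log(restUrl);    // https://s3.eu-west-1.amazonaws.com/fanderl.rocks/index.html
console.log(websiteUrl); // http://fanderl.rocks.s3-website-eu-west-1.amazonaws.com
```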

4. Get a domain

In order to not have a cryptic IP or S3-specific URL, we want to get a domain, so that users can just type http://fanderl.rocks into their browser and find our website. To do this, we need to register the domain for ourselves and pay a small fee, so that whenever somebody types the above URL into the browser, they are directed to our website. The protocol behind this is called DNS - the Domain Name System. Here's what I did to register the domain for the content in S3:

  1. Buy the domain The first step is to buy the domain and secure it for yourself. This is pretty straightforward and something that can't be done with CDK. So I will just post this AWS guide that shows you how to do this in the AWS console.
  2. Add DNS A records to our CDK stack, so that users will be redirected to our S3 buckets An A record is an entry in the registry of the DNS provider that tells the browser a specific IP address when queried for the domain. One can also register subdomains, e.g. www.fanderl.rocks.

    const hosted_zone = route53.HostedZone.fromLookup(this, 'fanderl_rocks_hosted_zone', {domainName});

    new route53.ARecord(this, 'fanderl_rocks_root_a_record', {
      zone: hosted_zone,
      target: route53.RecordTarget.fromAlias(new targets.BucketWebsiteTarget(root_bucket))
    });

    new route53.ARecord(this, 'fanderl_rocks_subdmain_a_record', {
      zone: hosted_zone,
      recordName: subdomain,
      target: route53.RecordTarget.fromAlias(new targets.BucketWebsiteTarget(subdomain_bucket))
    });
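What the two records express can be summed up in plain data. This sketch is not CDK code, just an illustration of the naming rule from the hosting guide: each record's name must equal the name of the bucket it aliases, because S3 website hosting routes requests by the Host header.

```typescript
const domainName = 'fanderl.rocks';
const subdomain = 'www';

// Each A record aliases one name to the website endpoint of the
// identically named bucket.
const records = [
  // Root domain -> bucket that actually holds the content.
  { name: domainName, aliasedBucket: domainName },
  // www subdomain -> bucket that only redirects to the root domain.
  { name: `${subdomain}.${domainName}`, aliasedBucket: `${subdomain}.${domainName}` },
];

for (const r of records) {
  console.log(`${r.name} -> ${r.aliasedBucket} website endpoint`);
}
```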

5. Summary

Holy, that was a lot of stuff to do and digest. But finally we arrived and have static content delivered on our own custom domain. That's something we can be very proud of. So let's quickly go to fanderl.rocks and have a relaxing cup of coffee that we earned ourselves.
