This article is, in essence, a documentation of the steps taken to create this website, a web 1.0 style collection of hyperlinked static webpages. When I first decided to create this site, I already had a number of domains registered through Namecheap, and I've been very happy with their services, both as a registrar and domain name system (DNS) provider. I also knew that I wanted to use the Simple Storage Service (S3) solution provided by Amazon Web Services (AWS) to host the files that would constitute the website. The problem with these two starting points is that while AWS provides a documentation article, Hosting a Static Website on Amazon S3, as well as a specific guide in the documentation on Setting up a Static Website Using a Custom Domain, the documentation presupposes that you will be using Amazon's Route 53 service for DNS management. This left me with the problem of figuring out how to configure my DNS records so that they would play nice with S3 and enable the philologia.io domain to correctly point to the corresponding S3 bucket. This article serves as a guide to solving this problem, in case others are struggling with these kinds of hosting configuration issues.
In solving this problem, there are two distinct components to be addressed: (1) configuring the S3 bucket to host a static website, and (2) configuring the DNS records to point to this S3 bucket.
To arrive at the final solution of a correctly configured S3 bucket, there were four subtasks to be performed: (1) creating a bucket whose name matches the domain, with public read access; (2) uploading the website's files into the bucket; (3) enabling static website hosting in the bucket's Properties tab; and (4) applying a bucket policy in the bucket's Permissions tab.
For step one, the name of the S3 bucket must match this structure:
www.[domain-name].[top-level-domain]
So, in my case, this had to be:
www.philologia.io
To facilitate queries both including and not including 'www' (e.g. http://www.philologia.io and http://philologia.io), the AWS documentation instructs you to create two buckets, one with 'www.' and one without (e.g. www.philologia.io and philologia.io). As it turns out, this is entirely unnecessary (there is a very good Stack Overflow response which proposes a two-bucket solution for S3-hosted websites with non-AWS DNS providers, but that bucket structure is needlessly complicated). I created only one bucket for this website, and both URLs resolve to the bucket's contents just fine. There is a DNS record responsible for this, which will be discussed in the following section.
During the bucket creation process, there is one non-standard option that must be selected. In the "Set Permissions" section, select "Grant public read access to this bucket" from the "Manage public permissions" dropdown menu. This will throw a warning message that "everyone in the world will have read access to this bucket." Under normal circumstances, you'd want to heed this warning, but since we're configuring this bucket as a website, we want everyone in the world to be able to see it.
Once the bucket has been created, subdirectories should be created within it to define the structure of the website's constituent files. Then, the files themselves can be uploaded into the bucket.
When uploading files, the "Grant public read access to this object(s)" option must be selected from the "Manage public permissions" menu in the "Set permissions" section. As with the creation of the bucket, this will throw a warning which can be safely ignored for the same reasons.
Once the files have been uploaded to the S3 bucket, configuration changes need to be made within the bucket's Properties tab.
From here, we see the Static Website Hosting tile, which gives us access to the settings that enable the bucket to function as a static website.
Within this tile, the option to "Use this bucket to host a website" should be enabled, and file names for the index and error documents must be entered.
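For reference, the same settings can also be expressed as the JSON website-configuration document accepted by the AWS CLI's `aws s3api put-bucket-website` command. This is a sketch: the index.html and error.html file names here are assumptions, so substitute whichever file names you entered for your own bucket.

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
```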
With this done, we can move on to the bucket's Permissions tab, to complete the configuration changes needed for the bucket.
From here, select the Bucket Policy tile to add an S3 bucket policy that fully defines the bucket's access permissions.
The text that must be pasted into this section is a specific piece of JavaScript Object Notation (JSON) code. The key thing to note here is the value of the "Resource" key: it will be the Amazon Resource Name (ARN) for the bucket. The '/*' at the end is particularly important as well, since it applies the policy to all files within the bucket, making the entire site publicly available.
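For those who want to see the shape of the policy, AWS's standard public-read template for a static website bucket looks like the following; the bucket name inside the ARN (here, this site's www.philologia.io) is the only part you'd change:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.philologia.io/*"
    }
  ]
}
```

Note the "Resource" value ending in '/*', which is what extends the public `s3:GetObject` permission to every object in the bucket rather than the bucket alone.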
Once that policy has been saved, all of the configuration changes needed on the S3 side of things are complete.
With all of the website's files uploaded to a properly configured S3 bucket, all that remains is the DNS configuration. The locations where these configuration options can be made will vary from provider to provider, but since DNS itself is provider-agnostic, the records which need to be added are universal across providers. For those using Namecheap, the first step is to select the specific domain that's going to be used for the website.
From here, select the Advanced DNS tile, to pull up the DNS configuration options.
In this section, there are two DNS records that must exist: (1) a CNAME record which points to the S3 bucket, and (2) a URL Redirect record which points to the 'www.' version of the domain name.
For the CNAME record, the host must be set to 'www' and the value is drawn from the Endpoint URL given in the Static Website Hosting dialog window during the S3 bucket creation process (see above screenshot). To get the CNAME value from the S3 endpoint, simply remove the 'http://' from the beginning and add a '.' character to the end. For the importance of the terminal period in the CNAME record, see this article.

The host in the URL Redirect record must be a '@' character, and the value should be the full URL of the page, including the 'http://www.' prefix and a terminal '/' character. What this record accomplishes is to redirect any requests made to the raw philologia.io domain to http://www.philologia.io. And, since the CNAME record maps http://www.philologia.io to the configured S3 bucket, both http://philologia.io and http://www.philologia.io effectively map to one S3 bucket, saving the trouble of having to create, configure, and manage two S3 buckets for the same website.
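Putting the two records together, the Advanced DNS entries end up looking roughly like this sketch. The us-east-1 region in the endpoint is an assumption for illustration; use the endpoint shown for your own bucket, and your own domain in place of philologia.io.

```
Type          Host    Value
CNAME Record  www     www.philologia.io.s3-website-us-east-1.amazonaws.com.
URL Redirect  @       http://www.philologia.io/
```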