Wading through JAMStack - Fit the First

Some years ago, I built a (somewhat crummy) static site generator, whose sole purpose was to create a searchable display of my partner’s collection of 200 or so postcards depicting Morecambe in the early 20th century. Written in Clojure, it ingested a folder of images and a CSV spreadsheet of details about each card (along with supplementary notes), and spat out a folder containing resized images, HTML files, some JavaScript and a JSON index of the content for FTP to the server.

The site had the following features:

  • A front-page carousel which cycled through a selection of the cards

  • Full-text search of the content, provided by Tipue Search, returning a list of matching cards with thumbnails

  • A detail page for each card, containing

    • Front and back images

    • A summary of title, publication date and publisher

    • Notes and comments

    • Tags for the card, linking to similarly tagged items

    • A Google Street View to show how the location looks now, where available

It was pretty painful to use, involving manual tweaking of menus and the like whenever the collection grew. Now that it’s grown to more than 300 cards, I felt the time had come to implement something better. Since I had recently launched this site on Netlify, I wondered whether the JAMStack, coupled with an existing static site generator, might offer a solution that fits the bill. There was only one way to find out, and this sequence of posts will document my experiments in constructing the new site.

Some of Netlify’s features that were of particular interest were:

  • Support for Git Large File Storage with image resizing on demand (see the example after this list)

  • Integration with FaunaDB, offering the prospect of GraphQL querying without my having to implement and host a server with (e.g.) Lacinia

  • Serverless function integration via AWS Lambda, with the possibility of writing these in ClojureScript

  • Authentication for site administration functionality
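
As a taste of the first of these: once Large Media is enabled, resized renditions can be requested just by adding query parameters to an image’s URL. The path below is hypothetical, but nf_resize is Netlify’s parameter:

<img src="/postcards/106/front.jpg?nf_resize=fit&w=320">

(nf_resize=fit scales the image to fit the given width and/or height, while nf_resize=smartcrop crops it to exactly the requested dimensions.)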

For the static site generator, I’ve chosen Cryogen, because Clojure.

My first task is to migrate the existing data into the form I need. I’ve got a folder filled with the original scans of the postcards, and the directory listing looks like this:

...
106 A Morecambe Trawler (back).jpeg
106 A Morecambe Trawler (front).jpeg
107 At Rest, Morecambe Bay (back).jpeg
107 At Rest, Morecambe Bay (front).jpeg
108 Excurson Platform (back).jpeg
108 Excurson Platform (front).jpeg
109 Esplanade & Winter Gardens, Morecambe (back).jpeg
109 Esplanade & Winter Gardens, Morecambe (front).jpeg
10 Fairy Chapel, Heysham (back).jpeg
10 Fairy Chapel, Heysham (front).jpeg
110 Bathing and Paddling at Morecambe (back).jpeg
110 Bathing and Paddling at Morecambe (front).jpeg
...

Ideally, I want the structure to look more like this:

.
├── 106
│   ├── back.jpg
│   └── front.jpg
└── 107
    ├── back.jpg
    └── front.jpg

The JSON index of the site that I mentioned earlier contains all the image filenames along with the numerical indexes, so I can rename the files consistently without losing any information. This renaming operation seems like an ideal task for Babashka. The script below uses a regular expression with capturing groups to pull out the index number and whether the scan is of the front or the back, then copies the existing files into a new folder structure, with source and destination directories passed in on the command line:

rename.clj
(require '[clojure.java.io :as io])

(defn copy-file
  "Copy a matched scan to <dest>/<idx>/<front-back>.jpg."
  [[old-name idx front-back source dest]]
  (let [in  (io/file source old-name)
        out (io/file dest idx (str front-back ".jpg"))]
    (io/make-parents out)
    (io/copy in out)))

(let [[source dest] (map io/file (take 2 *command-line-args*))]
  (.mkdir dest)
  (->> (file-seq source)
       (map #(.getName %))
       ;; capture the card number and whether this is a front or back scan
       (map #(re-matches #"(\d+)[^(]+\((\w+)\)\.jpeg" %))
       (filter identity)
       ;; conj appends source and dest to each [match idx front-back] vector
       (map #(copy-file (conj % source dest)))
       ;; force the lazy sequence so the copies actually happen
       (doall)))
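
To see what copy-file receives, here’s the match for one of the files above, evaluated at a Babashka REPL:

(re-matches #"(\d+)[^(]+\((\w+)\)\.jpeg" "106 A Morecambe Trawler (front).jpeg")
;; => ["106 A Morecambe Trawler (front).jpeg" "106" "front"]

conj then appends the source and destination directories to that vector, which is exactly the shape that copy-file destructures.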

With Babashka installed, the following command restructures the postcard image directory (the printed nils are just the return values of io/copy, one per file):

bb rename.clj old-postcard-dir new-postcard-dir
# (nil nil nil...

In the next instalment, I’ll deal with migrating the JSON index to FaunaDB.

Last modified: 4 May 2020