Moving the website to Lektor

Years ago, I moved all of FunnelFiasco (except the blog, which runs on WordPress) from artisanally hand-crafted HTML to using a static site generator. At the time, I chose a project called “blatter”, which used jinja2 templates to generate a site. This gave me the opportunity to change basic information across the whole site at once. Not something I do often, but it’s a pain when I do.

Unfortunately, blatter was apparently quietly abandoned by the developer. This wasn’t really a problem until Python 2 reached end of life. Fedora (reasonably) retired much of the Python 2 ecosystem. I tried to port it to Python 3, but ran into a few problems. And frankly, the idea of taking on the maintenance burden for a project that hadn’t been updated in years was not at all appealing. So I went looking for something else.

I wanted to find something that used jinja2 in order to minimize the amount of work involved. I also wanted something focused on websites, not blogs specifically. It seems like so many platforms today are blog-first. That’s fine, it’s just not what I want. After some searching and a little bit of trial and error, I ended up selecting Lektor.

The good

Lektor is written (primarily) in Python 3 and uses jinja2 templates, so it hit my most important points. It has a command to run a local webserver for testing. In addition, you can set up multiple server configurations for deployment. So I can have the content sync to my local web server to verify it and then deploy that to my “production” webserver. Builds are destructive, but deploys are not, which means I don’t have to shoe-horn everything into Lektor.

Another great feature is the ability to programmatically generate thumbnails of images. I’ve made a little bit of use of that for the time being. In the future, especially if I ever go storm chasing again, I can see myself using that feature a lot more.
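If I’m reading the Lektor documentation right, thumbnails come from a method on image attachments that you call from a template, something like this (the 320-pixel width is arbitrary):

```jinja
{% for image in this.attachments.images %}
  <img src="{{ image.thumbnail(320) }}" alt="{{ image._id }}">
{% endfor %}
```

Lektor generates the resized files at build time, so the full-size originals never have to be shipped to visitors who only want the overview page.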

Lektor optionally supports writing the page content in markdown. I haven’t done this much since I was migrating pre-written content. I expect new content will be much markdownier. Markdown isn’t flexible enough for a lot of web purposes, but it covers some use cases well. Why write HTML when it’s not needed?

Lektor uses databags to provide input data to templates. I do this using JSON files. Complex operations with that are a lot easier than the embedded Python data structures that Blatter supported.
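As a sketch (the bag name and keys here are made up, and I may be misremembering the exact syntax), a databag might live in `databags/nav.json`:

```json
{
  "home":  { "title": "Home",  "url": "/" },
  "radar": { "title": "Radar", "url": "/radar/" }
}
```

A template can then pull values out with Lektor’s `bag()` function, e.g. `{{ bag('nav.home.title') }}`, instead of hard-coding the same navigation data on every page.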

If I were interested in translating my site into multiple languages, Lektor has good support for that (including changing URLs). It also has a built-in admin and editing console, which is not something I use, but I can see the appeal.

The bad

Unlike Blatter, Lektor puts content and templates in separate files. This makes it a little more difficult to special-case a specific page.

It also has a “one directory, one file” paradigm. Directories can have “attachments”, which can include HTML files, but those won’t get processed, so they need to stand alone. This is not such an issue if you’re starting from scratch. Since I’m not, it was more of a headache. You can override the page’s slug, but that also makes certain assumptions.
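For example, the slug override is just a system field in the page’s contents.lr (this one is made up):

```
_slug: afd
---
title: Forecast Discussion Hall of Fame
```

That lets one page land at a custom URL, but the rest of the tree still follows Lektor’s directory-derived naming.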

For the Forecast Discussion Hall of Fame, I wanted to keep URLs as-is. That site has been linked to from a lot of places, and I’d hate to break those inbound links. Writing an htaccess file to redirect to the new URLs didn’t sound ideal either. I ended up writing a one-line patch that passed the argument I need to the python-slugify library. I tried to do it the right way so that it would be configurable, but it was beyond my skill to do so.
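For comparison, the htaccess route I decided against would have meant one rule per renamed page, roughly like this (the paths are invented):

```apache
# mod_alias redirect from the historical URL to the new slugified one
Redirect permanent /afd/afd_archive.html /afd/afd-archive/
```

That keeps inbound links working, but it leaves a pile of redirects to maintain forever, which is why patching the slug generation appealed to me more.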

The big downside is the fact that development has ground to a halt. It’s not abandoned, but the development activity happens in spurts. Right now it’s doing what I need it to do, but I worry that at some point I’ll have to make a switch again. I’d like to contribute more upstream, but my skills are not advanced enough for that.

Building my website with blatter

I recently came across a project called “blatter”. It’s a Python script that uses jinja2’s template engine to build static websites. This is exactly the sort of thing I’d been looking for. I don’t do anything too fancy with it, but every once in a while I want to make a change across all (or at least most) pages. For example, I recently updated the default content license from CC BY-NC-SA 3.0 United States to CC BY-NC-SA 4.0 International. It’s a relatively minor change, but changing it everywhere is a real pain.

Sure, I could switch to a real CMS (heck, I already have WordPress installed!) or re-do the site in PHP, but that sounded too much like effort. I like my static pages that are artisanally hand-crafted (read: slapped together in vi), but I also like being able to make lazy changes. And I really like page-to-page consistency. With blatter, I can create a few small templates and suddenly changes can be made across the whole site in just a few seconds.

Blatter smoothly merges static and templated content. The only downside is that because it seems to touch all files every time it builds (blats), pushing the new content to my website becomes a larger task. That’s not a huge concern because of the relatively small size of the content, but it’s something that seems fixable. So pretty much all of the site has been blatterized now. For the most part, you shouldn’t really notice any changes.

Bad security from

The problem with having my first initial and last name as my email address is that I get a lot of email meant for other people. The other day, I started getting messages from a dating site intended for a man named Barry. Not wanting to keep Barry from true love, I looked for a support address. There wasn’t one, so I tried going to the site by clicking the link. Imagine my surprise when I had full access to Barry’s profile.

Since I was in a good mood, I didn’t pretend to be Barry. I simply deactivated his account so that maybe he’d notice. The next day, I got another email saying a woman was interested in me (well, in Barry). Once again, I clicked the link to take me to the site. This time, I unchecked all of the notification options and put in an unused email address. I was interested to see what I could do if I were a bad person, so I looked at Barry’s profile information. He didn’t have much filled out, but he had his date of birth and his ZIP code. From those, I could use an online phone book to find his address and phone number.

That’s plenty of information for a social engineering attack. Considering he couldn’t enter his email address correctly, I’m willing to bet that if I called pretending to be from his bank, library, local police, etc. I could get even more information out of him. Identity theft city, folks. It’s lucky for Barry that I’m generally a nice guy. Though it’s not really Barry’s fault that the site is stupid enough to allow unauthenticated access to user settings. I’ve tried to contact the operators and have not yet received a response, so I’m opting to shame them publicly.

But ladies, if you’re looking for Barry, he’s not here.

Book review: Version Control by Example

A few weeks ago, I heard that Eric Sink was giving away copies of his new book Version Control by Example. Since I like free things and know just enough about version control systems (VCSs) to be dangerous, I figured I should get a copy. Turns out that was a wise decision. I use Subversion at work and Git with Fedora and personal projects, so I haven’t been able to get really good at either system. After reading this book, I’m still no expert but I’ve got a little more competence (and, more importantly, a handy reference).

As the title suggests, this book is centered around actual examples. In walking through Subversion, Mercurial, Git, and Veracity, Sink uses the same example scenario, making it easy to understand the similarities and differences between the systems. Although he clearly favors the distributed VCSs, the book gives Subversion a fair treatment, discussing situations where a centralized VCS is more appropriate (for example, when it’s necessary to have path-based access controls).

The best feature of Version Control by Example is the writing style. Much like the “for Dummies” series, the writing style is light and humorous. This makes it a very easy book to read through, and certainly aids my focus. The only downside to this book is that it lacks a detailed treatment of advanced topics. Still, as an introductory book this is excellent. Given that Sink seems insistent on not making any money off this book, I encourage anyone who uses version control in any capacity (or anyone who doesn’t but should!) to have a copy. Details on the free book offer can be found at

Status of the Internet

I’ve been meaning to do this since the Comcast DNS issues a few weeks ago, but I’ve finally put together a quick page with links to the status pages of various web services and sites.  You can check the status of the Internet at  It’s a bit surprising how many sites don’t have obvious status pages.  It makes sense for popular sites to have a separate server for status information that users can find when they’re having problems.  I don’t have one for FunnelFiasco because this isn’t a popular site.  If I ever get popular, wake me up from my faint and I’ll stand a status page up.  If any of my dear readers know of sites that have status pages, send me a link or leave a comment and I’ll get it added.

I also had the chance to improve my CSS for FunnelFiasco while writing this.  Over the summer, I was able to find out how to get rid of tables for pictures.  Today, I mostly copied that implementation for “text tables”, or content where I use a table-like format for displaying the data, but don’t need the rigidity.  There are still a few things to work out, but I’m pretty happy with it so far.  Now to backport those changes into other pages.

Lightning photos added

On Monday evening, Angie and I went chasing unexpectedly.  While the storm produced some wind damage, I’ve been unable to find confirmation of a tornado (there was at least one real-time report, though).  We saw very little of interest, until the incredible light show afterward.  So I present to you a few boring cloud pictures, plus also the lightning:

This is also kind of exciting because, for the first time, I’ve forgone the use of tables to lay out the photos.  The result is that the page renders based on what your screen wants, not on what I demand.  This is supposedly a much less evil way to do things.  In the future, I’ll be converting some of the older pages to work this way as well.
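A minimal sketch of the table-free approach (the class name is mine, not necessarily what the page uses):

```css
/* Thumbnails float side by side and wrap to fit the window,
   instead of being locked into fixed table rows. */
.photo {
  float: left;
  margin: 0.5em;
  text-align: center;
}
```

Each photo just gets `class="photo"`, and the browser decides how many fit per row.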

Further, I’ve updated some of the pages to reflect new license information.  Instead of my custom-written “you must have my permission” text, I’m now licensing content under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.  It’s simpler, more enforceable, and more in line with my own personal values.  There’s a blog post forthcoming where I muse upon licensing issues.  In the meantime, know that the content on the site is under whatever license it says it is.  I’ll work on updating the license text on pages soon.  And maybe I should consider a more dynamic site (e.g. using PHP) so that I don’t have to keep making these changes on each. freaking. individual. page.

Perl’s popup_menu cares how you give it data

Last weekend when I was working on the script that mirrors and presents radar data for mobile use, I decided the less work I had to do, the better.  To that end, I tried to make heavy use of the CGI.pm Perl module.  In addition to handling the CGI input, CGI.pm also prints regular HTML tags, so you can avoid having to throw a bunch of HTML markup in your print statements.  This makes for much cleaner code and reduces the chances you’ll make a silly formatting mistake.

Everything was going well until I added the popup menu to select the radar product.  Initially I followed the example in the documentation and it worked.  As I went on, I decided instead of having two hashes for the product information, it made sense to make my hash include not only the product description, but the URL pattern I’d be using when it came time to mirror the image.  Unfortunately, when I tried to make that change, my popup form no longer had the labels I wanted.

I kept poking at it for a while and finally got frustrated to the point where I decided I’d just write a foreach loop and have that part print the HTML markup instead of using functions.  Fortunately, I first talked to my friend Mike about it.  I sent him the code and, after a little bit of working, he realized what my problem was: CGI.pm’s popup_menu function expects a reference to a hash for the labels, not an array (I’m not really sure why, maybe someone can explain it?).  Once that was settled, the script worked as expected and the remainder was finished in short order.
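The fix looks roughly like this (the product codes and URL patterns are made up for illustration, not my actual script):

```perl
use strict;
use warnings;
use CGI qw(:standard);

# Hypothetical product data: code => [description, URL pattern]
my %products = (
    N0R => [ 'Base reflectivity', 'http://example.com/%s_N0R.gif' ],
    N0V => [ 'Base velocity',     'http://example.com/%s_N0V.gif' ],
);

# Build a plain code => description hash for the labels...
my %labels = map { $_ => $products{$_}[0] } keys %products;

# ...and hand popup_menu a *reference* to it. Passing %labels directly
# flattens it into a list, and the label mapping silently disappears.
print popup_menu(
    -name   => 'product',
    -values => [ sort keys %products ],
    -labels => \%labels,
);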

Sometimes, it really helps to pay attention to the data type that a function expects.

It’s just like Magick

Recently at work, I had to post images from our awards banquet.  With over 100 pictures at nearly 5000×5000 pixels, loading that page would take forever.  Clearly, this was a case for Captain Thumbnail.  What’s a thumbnail?  It’s a reduced-size version of an image, according to Wikipedia.  There are two main reasons to use thumbnails.  The first is to make it easier to quickly visually inspect a collection of images.  The second is to reduce the time it takes a page to download and render.

Some people just use HEIGHT and WIDTH in their HTML <img> tags.  That is very naughty.  Using that method doesn’t shrink the data size of the image, so people on slow connections still have to wait for the full image to download.  I learned this when I shared pictures of my first tornado with fellow chasers.  In order to prevent offending the bandwidth of Nealras again, I found a nifty little program called “Media Resizer.”  The free version allows you to resize images, add watermarks, etc.  What the free version does not do is allow you to process a batch of images.
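The bandwidth-friendly pattern is to serve a genuinely smaller file and link it to the original (the filenames here are invented):

```html
<!-- thumb is an actual smaller file; width/height just reserve layout space -->
<a href="tornado-full.jpg">
  <img src="tornado-thumb.jpg" width="150" height="100" alt="My first tornado">
</a>
```

Visitors on slow connections only download the full-size image if they actually click through.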

Re-sizing an image or two individually isn’t such a big deal.  Making over 100 thumbnails one at a time is the suck.  Knowing the GNU Image Manipulation Program (The GIMP) could resize images, and assuming that it would be scriptable somehow, I began Googling.  What I found instead was a nifty suite of tools collectively known as ImageMagick.

ImageMagick can create thumbnails, sure.  It can add effects, like torn photo edges.  It can convert text to images, or images to different formats.  It can probably do a whole lot more that I haven’t even explored yet.  The best part is that it is easily scriptable, much easier than scripting in The GIMP.  The bester part is that it seems to be included with most Linux distributions.  If you need to do something with an image, especially if you have a bunch of images, think ImageMagick first.  Very helpful examples can be found at:
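A batch thumbnail run is a one-liner loop with ImageMagick’s classic `convert` tool. Here’s a self-contained sketch; the `xc:` solid-color images stand in for real banquet photos, and the 150×150 bounding box is arbitrary:

```shell
# Work in a scratch area; "photos" holds the full-size originals.
mkdir -p photos thumbs

# Generate two stand-in images (in real life these are the 5000x5000 photos).
convert -size 800x600 xc:skyblue photos/banquet1.jpg
convert -size 800x600 xc:gray   photos/banquet2.jpg

# Batch-create thumbnails no larger than 150x150, preserving aspect ratio.
for f in photos/*.jpg; do
  convert "$f" -thumbnail 150x150 "thumbs/$(basename "$f")"
done
```

The `-thumbnail` operator also strips image metadata to keep the files small, which is exactly what you want for a gallery overview page.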

The right way to build a web site

Today marks the 5th anniversary of my first tornado, an F1 that struck Jamestown, Indiana.  It was a very exciting day for me, for many reasons.  One of which is that I felt like I finally had some legitimate content to put on my website.  By 2004, I had been making web pages for about ten years.  My first efforts were with AOL’s Page Builder (I think that’s what they called it).  It was  really simple and probably had a total of zero visitors in its entire existence.  By the end of the 20th century, I had discovered Angelfire.  My page on Angelfire was simply a place to store some of the crappy songs and stories I had written.   I had no intention of making it widely visited.  It was mostly a text-based site, including some pages that were made with Microsoft Word (yuk!).

It would only get worse.  In the final year of the millennium, I began developing my page on Geocities.  My Geocities page was everything the Angelfire page was, plus pictures of me and some random photos I took while driving around with my friend Erik.  Oh yes, and it was everything that you’d expect a Geocities page to be:  tiled background pictures, all manner of animated GIFs, superfluous exclamation points.  It captured the spirit of the age as well as any site.

Things began to improve when I went off to college.  I began hosting my site on the University’s servers, which meant I didn’t have all of the begging-to-be-abused site-building tools.  I also had marginally-interesting content, so the GIFs mostly went away.  Development in that first year was mostly done in Microsoft FrontPage (also yuk!), which forced me into a simpler design style.  The best was yet to come, though.  In the spring of 2002, I was hired to maintain my department’s weather data server.  One of the first things I wanted to do was add more products to the website.  I picked up a copy of HTML 4 For Dummies: Quick Reference and taught myself basic HTML.  From that point on, my design tool of choice was a text editor.  Notepad, PC-PICO, vim, whatever was available that let me get my ASCII on.  To this day, the majority of the site remains hand-edited.

The next big step came in 2005 or 2006.  Our department printed a CD every year to recruit graduate students.  I was tasked with making some provided updates.  The first thing I had to do was to convert the content from a series of PowerPoint files (okay, seriously?  Who does this?!) into HTML.  With some guidance from our department’s webmaster, I found that Cascading Style Sheets would cut down on a lot of the redundant work (and would make future updates much simpler), so I found some material online and learned CSS.  When I began re-designing my site last year, I had to re-learn my CSS, but it was well worth the effort.

So why did I start this post out by mentioning the Jamestown tornado?  It just so happens that yesterday I was driving to the very spot where I saw the tornado develop.  On the drive, I had the opportunity to consider many things, but what I ended up considering was my website.  My storm chasing exploits seem like they’d fit on a blog-type site rather well.  Would it be worth moving them?  WordPress also has static pages.  Could I just make my entire site contained in WordPress?  Or maybe I could make the whole site dynamically-produced.  That might be cool.

After some consideration, I’ve decided that I like how my site works.  It’s not the best, or the prettiest, but it works for me, and I think it works for the people who happen upon it.  I may not be a web designer, but I think when you can sit back and say “this site works”, then you’ve done it right.

Why pay the difference if you can’t tell the difference?

I’m fortunate to have a hosting service that provides bandwidth on the cheap. For $20 per month, I get 100 Gigabytes of bandwidth. In one sense, it is good that I’ve never come anywhere near that limit. On the other hand, why put the effort into a website if no one is looking at it? As your site gets bigger (or gets the love from Fark, Digg, Reddit, etc) you face a choice: pay for more bandwidth or reduce the bandwidth usage of your site?

At first, reducing your bandwidth might be as easy as resizing a few images, but that only gets you so far. So what then? Make use of free bandwidth, of course! Have a few charts on your website? Use the Google Chart service. Have a large number of pictures? Host them on places like Picasa, Flickr, or Photobucket. All of a sudden, you’ve got a much bigger cushion, for not a penny more. As your site gets more popular (and hopefully starts bringing in some revenue), you can pony up the cash for more bandwidth, but why not save it while you can?
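At the time, the Google Chart service worked by encoding the whole chart in a URL, so the chart image was rendered and served from Google’s bandwidth, not yours. The example below is illustrative (sizes, data, and labels are mine):

```
http://chart.apis.google.com/chart?cht=p3&chs=250x100&chd=t:60,40&chl=Hello|World
```

Drop that URL into an ordinary `<img>` tag and your server never touches a single chart byte.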