Monday, June 29, 2015

Old Photographs, Love, and Internet Comments

As you may have noticed, there is no "comments" section on Amphibian.com. After viewing one of my delightful comics, there is no place for you to voice your opinion on it. It probably wouldn't cause very much trouble if I allowed comments. After all, I have the comments turned on here and there's never been an issue. My comics don't really get into a lot of controversial topics and neither does my blog. You can debate me on my code style, I suppose.

But you may have noticed that most news sites have comments after the articles. National news agencies, local television news sites, newspapers, even NPR. I would like to suggest that these all be disabled, immediately.

I don't know why Internet comments sections seem to give people license to say the ugliest, most insensitive, and unintelligent things. Is it the relative anonymity of the medium? Or do the biggest jerks mankind has to offer just gravitate towards news article comments like flies to decaying carcasses of roadkill? Would these people say these same things in person?

Alright, I may be in a bit of a bad mood and I'm being too hard on the comments sections. I just read an NPR article on the subject of one of Mary Ellen Mark's photographs from 1990. Mark died recently and was a prolific photographer specializing in what might be best described as very sad pictures. She was known for, among other things, her images of homeless children that appeared in Life magazine. After her death, someone tracked down the subject of a particularly haunting image to find out the rest of the story. You may have seen the picture already as it has been making its rounds on social media.

Go read the article now. Here is the link: What Happened To The 9-Year-Old Smoking In Mary Ellen Mark's Photo? You can even read the comments at the bottom. In fact, you should. Stare into the cold.

...waiting...

Are you back now? Good. The story of that girl is quite sad. Despite how she acted around the time of the photograph, she knew that she wanted help. She wanted a way out and hoped somehow the photographer could provide it. I'm glad she survived, but her life seems only marginally better today. Did the photographer fail her? No, Mary Ellen Mark's job was to take pictures.

You know who did fail her? All of us. Especially the callous, arrogant, self-absorbed commenters at the end of the story. But while reading through all of the comments you occasionally encounter a glimmer of hope. There are a few people who seem like they may be legitimately concerned with how our society deals with children like this. Those people only become targets for ridicule on the news page, unfortunately. I won't use the comments section, but I'll respond here. I know they'll probably never read this - but someone else might and it might do some good.

What Can Be Done?


The sad fact is that there are thousands of children today in the same condition as the girl in that old picture. Their stories might end up quite the same way. When their homes get too bad, they might get picked up by county workers and placed in foster care. Much like what happened to the photo subject, they will be bounced around to multiple families before ending up in a group home. There they can pick up lots more self-destructive habits before being released from the system at age 18. At that point they can end up homeless, or possibly in jail, or in abusive relationships. It happens. All the time.

So here's what you do. Become a foster parent. Put aside any glamorous notions of getting happy, well-adjusted kids who are eagerly waiting for loving parents - the commercials on TV don't show you the reality of foster care. Take in older children - over the age of 10. These are the ones most at risk. They're going to have emotional baggage. Expect it. They'll act out. They'll do terrible things. Love them anyway. When they start getting too close to you, they'll act out worse. They'll be afraid of getting hurt and will try to push away anyone who appears to care for them. This is normal. Love them more. Love them unconditionally. I'm not talking about some kind of soft, let-them-do-whatever-they-want, no-consequences kind of love. You need to set clear boundaries and be prepared to discipline when those boundaries are overstepped. But this is just part of love. When the foster care agency and the county workers come to you and say it will be best to put them in a respite for a while or try a placement in a different home or put them in a group home, tell them no. Sending the kids away at this point only reinforces what they believe about themselves - that bad behavior will get them sent away and that they are too bad to be accepted and loved. Show them that this isn't true. Love them more. They will get worse. Love them more. You will question your sanity. Your friends and family will question your sanity. You will question everything else. They will get worse. Love. Them. More.

They will get better. Slowly at first. There might be setbacks, but there will be progress. And maybe in the end, the child grows up with some security, stays off the streets and out of jail and someday forms positive relationships.

Of course, don't be too hard on yourself if it doesn't work out with every child. Do your best and don't give up. You can't help everyone but odds are that you can help someone.

That's what you do.

It's not for the weak. Loving people this way requires strength. It's not something you take on casually. Loving people this way requires a commitment of everything you have. It might sound like a Huey Lewis & The News song from 1985, but love is the most powerful force in the universe.

Think about it. Somewhere out there today is a girl in a situation just like the one that girl in the picture was in 25 years ago. Maybe you could make sure her next 25 years turn out better.

Or maybe you just want to comment on news articles on the Internet.

Amphibian.com comic for 29 June 2015

Friday, June 26, 2015

Node XML Parsing

XML. Why doesn't it stand for eXtinct Markup Language? It's one of those unavoidable unpleasantries of software development. Sooner or later, you're going to have to deal with it.

Since starting to work with Node, I've been able to ignore XML for quite a while. Most JavaScripty things use the much nicer JSON format for their data. However, this past week I finally had to break down and get my hands dirty with XML in Node. It wasn't that bad...

I resorted to this because I wanted to pre-process the SVG images that I upload in the Amphibian.com editor. When I create SVG images with Inkscape, they never have viewBox attributes - which are absolutely necessary for correct display in Internet Explorer. I got tired of adding them manually before uploading and decided that the system should just do it for me. Because the SVG image format is just XML, all I need to do is parse the document and look for the viewBox attribute. If it's not there, I can create it using the values of the width and height attributes.

There are two main ways of dealing with XML. One is using a SAX parser. SAX parsing is basically event-driven and is most useful when you need efficient read-only access to XML documents. The other way is using a DOM parser. DOM parsers build Document Object Models out of the XML and therefore enable random read/write access at the cost of having to store the entire document in memory. Because I need to alter the XML documents, I selected a DOM parsing approach.

I am using the xmldom package. I selected it primarily because it is a native JavaScript implementation. Some of the XML parser packages for Node have library dependencies which make life difficult when you run on multiple platforms like I do.

Here is some example code that does the same thing my web app does - parse an SVG, look for some attributes, potentially add a missing one, and output the SVG. This example reads the SVG from a file instead of from a web form upload.

var xmldom = require("xmldom");
var fs = require("fs");

var DOMParser = xmldom.DOMParser;
var XMLSerializer = xmldom.XMLSerializer;

var svgString = fs.readFileSync("frog.svg", { encoding: "utf8" });

var svgDoc = new DOMParser().parseFromString(svgString);
var root = svgDoc.documentElement;

var svgWidth = root.getAttribute("width");
var svgHeight = root.getAttribute("height");
var svgBox = root.getAttribute("viewBox");

if (svgBox === "") {
    root.setAttribute("viewBox", 
       "0 0 " + svgWidth + " " + svgHeight);
}

console.log(new XMLSerializer().serializeToString(svgDoc));


And this is an example of the SVG XML in the file frog.svg. I cut out most of it to save space here - the only part this example cares about is the root element.

<svg xmlns="http://www.w3.org/2000/svg" width="276" height="281">
    ...
</svg>

Let's break down the steps here. First, require the xmldom object on line 1. That is used to get DOMParser and XMLSerializer objects on lines 4 and 5. The DOMParser is obviously the parser, but the XMLSerializer is what you need if you intend to dump an XML DOM back out to a string.

Line 7 reads the contents of the SVG file into a string. DOMParser MUST have a string to function, NOT a buffer. Specifying the encoding when reading forces the return value from readFileSync to be a string, but other steps may be needed if you are getting the SVG via another mechanism.
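If you happen to get the SVG as a Buffer instead (from a file upload, say), converting it before parsing is enough. Here's a little sketch - the SVG content is just a placeholder, and I'm using the newer Buffer.from API (older Node versions used the new Buffer() constructor instead):

```javascript
// Sketch: turning a Buffer (e.g. from an upload) into the string
// that DOMParser requires. The SVG markup here is a placeholder.
var buf = Buffer.from("<svg xmlns=\"http://www.w3.org/2000/svg\"/>", "utf8");
var svgString = buf.toString("utf8");

console.log(typeof svgString); // "string"
```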

Line 9 parses the SVG string into a document. In DOM language, the root node, svg in this case, is known as the documentElement. I set that to a variable named root merely for convenience on line 10.

Lines 12, 13, and 14 are where I read the values of the various attributes. The interesting thing about the xmldom package is how it handles missing attributes. Note that on line 16 I have to check to see if the svgBox variable is the empty string instead of null. If the attribute is not present in the document, getting it returns an empty string. I found this to be counter-intuitive, as a null would seem to make more sense. But it is what it is...

Line 18 calls setAttribute on the root element to add the viewBox if it was not found. Line 21 uses the XMLSerializer to create a string out of the (possibly modified) SVG DOM and logs it to the screen. If you try it yourself, you should see that the viewBox attribute is in fact added if not present in the original document.

So I processed a little XML with Node and it wasn't a completely terrible experience. It was actually just as nice as any Java XML DOM parsers I've used recently. Probably nicer. The best part is now I can save myself all that time spent manually adding in viewBox attributes to frog pictures. While we're on that subject, read and share today's comic:

Amphibian.com comic for 26 June 2015

Wednesday, June 24, 2015

Responsive SVG

I mentioned the other day that I was working on an update for caseyleonard.com that included even more full-screen frog images. Much like the current site, I want to use SVG images and have the frogs scale right along with the browser window, for a smooth responsive effect.

And as usual, Microsoft Internet Explorer has to ruin the party.

Instead of using <img> tags for the frogs this time, I am going to embed SVG markup directly in the HTML of the page. This has been possible for quite a while (IE support began with version 9) but is not often seen on "normal" web pages. Any page with my name on it will be far from normal.

You can try this yourself if you have some SVG lying around. It's best to use minified SVG (see last Wednesday's post) so your pages don't get too terribly large. Here is an example with the actual SVG stuff blanked out to save space:

<!doctype html>

<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Responsive SVG</title>
</head>

<body>

  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 276 281">
    ...
  </svg>

</body>

</html>

If you include a viewBox attribute to set the aspect ratio of the image and leave out the width and height attributes, good browsers like Chrome and Firefox scale the image to the maximum width of the container. Bad browsers like Internet Explorer assume a fixed height of 150 pixels and scale the width to create an image with the appropriate aspect ratio. Huh?

Properly scaled frog, in Chrome

Improperly scaled frog, in Internet Explorer

It turns out this is a known issue with IE, and it is easily correctable with some CSS. First, wrap the SVG element in a <div> of class container. Then add the following CSS to the page:

.container {
    width: 100%;
    height: 0;
    padding-top: 102%;
    position: relative;
}

svg {
    position: absolute;
    top: 0;
    left: 0;
}

The value of the container width is 100% because I want the frog to be as big as the window. You can use other values if you want your image to be smaller. The value for padding-top is calculated from the image's aspect ratio and will be different for every image. To get the percentage for padding-top, compute

( ( svg height) / (svg width) ) x (container width)

So in my example, the height of the frog image with my desired aspect ratio (from the viewBox) is 281 and the width is 276. I divide height by width and then multiply by 100 to get a value of 102. I use that for the padding-top percentage. Another look at the page in IE shows the correct result, and the page looks unchanged in Chrome.
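If you'd rather not do that arithmetic by hand, here's a tiny sketch of it in code (the function name is my own invention; the numbers are the frog's viewBox dimensions):

```javascript
// Compute the padding-top percentage for the responsive-SVG container.
// svgWidth and svgHeight come from the SVG's viewBox; containerWidth is
// the container's width as a percentage of its parent (100 in my case).
function paddingTopPercent(svgWidth, svgHeight, containerWidth) {
    return Math.round((svgHeight / svgWidth) * containerWidth);
}

console.log(paddingTopPercent(276, 281, 100)); // 102
```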

Correct this time, Internet Explorer

And that looks much better! It's still a shame that IE makes us do all this extra work. And speaking of extra work, in today's comic the frogs take server hardening a little too far.

Amphibian.com comic for 24 June 2015

Monday, June 22, 2015

I'm No Rock Star

With a new baby and three other children in my home, it's been difficult for me to do any coding projects outside of my day job lately. It's difficult just to make sure the comics are there every Monday, Wednesday, and Friday. It's difficult to wake up in the morning.

So there might not be a lot of good programming-related content on here for a little while. Maybe there never is any good programming content. Today's comic once again pokes fun at the need for organizations to hire mythical software developers that can perform 10 times the work of "regular" software devs like myself. I'm no Rock Star Programmer. But then, who is?

I spent a lot of time here.
Maybe Tim Sweeney. I would have to say that his 1991 game ZZT is one of the main reasons I am a software developer today. I spent uncountable hours not only playing that game, but also using the bundled editor to create my own game worlds. I never saw a need for games to use anything other than Code Page 437 for their graphics.

Code Page 437 - The only graphics you'll ever need
Many of my early programming projects were making my own game engines like ZZT with features I had wished for in Sweeney's game. It inspired me to write more software. When I started writing BBS door games in the early 90's, I dreamed of making an online multi-player game like ZZT using ANSI escape codes delivered via 14.4Kbps modems. Unfortunately, I had to go to high school and stuff most of the day and I never really got very far with it. I had to settle for making a few dollars off of in-game modules for Legend of the Red Dragon. The lesser-known sequel to LoRD ended up being extremely close to my vision, but never saw the kind of popularity of the original.

ZZT has always stuck with me as a gaming ideal.
  • Why don't all games come with editors for making your own worlds? (I've had to seek out some unauthorized editors like Lunar Magic in the past, but Super Mario Maker looks like a winner)
  • Graphics don't matter as much as player engagement. ZZT's ASCII graphics were already dated when it came out. Who cares?
  • Community matters. People shared their own ZZT worlds on BBS's. It wasn't a "forced" community like the Facebook-integrated games of today.
That's enough of my ramblings and nostalgia for now. Check out today's comic. As usual, it asks the tough questions. Like why do we hire Rock Stars instead of Country Music Stars? I know why we don't hire Classical Music Stars (they've all been dead for hundreds of years and are therefore less than desirable team members).

Amphibian.com comic for 22 June 2015

Friday, June 19, 2015

Numbers Aren't Always Numbers on iOS

A weird bug was brought to my attention yesterday concerning the Pivot comic from back at the end of May. If you read them regularly, you will remember that it was the one that had some frogs and a speech balloon spinning around (like a record, baby). The frog spinning was just an animation but the speech balloon rotation was an effect produced by changing the CSS transformation scaleX between 1 and -1. It creates the illusion that the balloon is rotating on a horizontal plane.

Based on a timer, I simply add or subtract a fraction of the scale value every few milliseconds. It worked fine on most platforms, but on iOS Safari there was an anomaly for small values very very very close to zero. A number like -0.000000000000005793976409762536, for example.

Because of the way numbers work in JavaScript, converting a number like that to a string value that can be used in a CSS property ends up looking like "-5.793976409762536e-15". And that's what Safari doesn't like. Apple even investigated it and everything. The official response was that it doesn't parse as a valid transform value. To fix it, I just call .toFixed(5) on the number before I put it in the CSS scaleX property. Five decimal places is plenty of precision, and it ensures that I never get the e thing in there.
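Here's a quick sketch of the problem and the fix, using that same tiny number (the css variable is just for illustration - in the real comic the value goes into the element's transform style):

```javascript
var scale = -0.000000000000005793976409762536;

// Plain string conversion switches to exponent notation for numbers
// this small, which is what iOS Safari choked on:
String(scale); // "-5.793976409762536e-15"

// Clamping to five decimal places keeps the value in plain notation:
var css = "scaleX(" + scale.toFixed(5) + ")";
console.log(css); // "scaleX(-0.00000)"
```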

But technically, I think it should have worked the way it was. According to the W3C specification, the value of scaleX (and of other transform functions as well) should be a real number - which can be expressed in exponent notation. Okay, I realize the CSS Transforms specification is just a draft...so I guess I'll go easy on Apple this time (even though 75% of the editors are Apple employees!).

But interesting tie-in...today's comic also focuses on speech balloons. Well, technically on a sub-category of speech balloons - thought balloons. Have you ever thought about the history of speech balloons? Neither had I, before this evening. However, it turns out that balloons or bubbles showing the words spoken in a painting or drawing can be traced back hundreds of years.

Before the 18th century, speech was often indicated by strips or bands coming out of people's mouths, such as the one depicted here:


This was difficult to read, and rather limiting. By 1896, what might be considered modern speech balloons had started to be used with The Yellow Kid in the comic Hogan's Alley. The Yellow Kid is considered to be the first American comic strip character because of his recurrent appearances in the strip. Within a few years, other comic authors adopted the style and here we are today.

Hopefully, someday people will look back at the innovative things I've done with web comics and say how I introduced such pivotal devices to visual story-telling. It could happen. Help me out by reading and sharing today's comic:

Amphibian.com comic for 19 June 2015

Wednesday, June 17, 2015

Optimizing SVG Files with Node

The comics for today and Monday have been scaling jokes, but there's nothing funny about Scalable Vector Graphics. Well, there probably is but I can't think of anything at the moment.

Due to the fact that I have a new baby at home (daughter number 4!) I haven't had as much time to work on code projects, but I have been trying to make some updates to my personal website whenever I have a few minutes. Much like my comics, I want to make use of SVG images of frogs on it. Really big, in-your-face frogs. Frogs that scale perfectly even on retina displays.

The downside to using SVG images is that they can easily become bloated. Sure, if you want a really large image it's almost always better to use an SVG instead of a "normal" image format. And since they are just XML text they compress well. But Inkscape puts a ton of extra stuff in them that doesn't really need to be there - removing all the unnecessary information can make the files even better!

There are some web-based tools for cleaning up SVG files, but the ones that I tried were not able to process my frogs without destroying them.

But then I found SVGO, the SVG Optimizer for Node.

It is super-configurable, modular, and can be run from the command line or included in an application. I installed and tested it from the command line:
$ npm install -g svgo
$ svgo frog.svg frog.min.svg
This image is the original file, frog.svg.

This one is 15k
...and this image is the optimized file, frog.min.svg.

This one is 5k
See any differences? I don't. And the optimized one is only 5k, whereas the original was 15k.

When I get a few more minutes, I intend to embed it in the workflow of adding or updating images for the comic. Once I do that, all SVGs that I create in Inkscape will be automatically optimized when I upload them before they are stored in the database. This should improve the download times for my comic as well as the render speed since there is less XML for the clients to process.

Not that the comic takes that long to load now...the server does gzip all the files before sending them as long as the client supports it. See for yourself by viewing the comic!

Amphibian.com comic for 17 June 2015

Monday, June 15, 2015

Using File Drop in Web Pages

Don't Litter - Drop Files in the Right Place!
When I'm making some of the more elaborate comics (such as the fire alarm from Friday or the agile dodgeball game) I like to work out the JavaScript on my test server here on my local network. But sharing the actual comic data (positions of frogs, text bubbles, etc) was always a pain. I would copy and paste JSON from the production server into a SQL statement for my local server or vice-versa. I decided that I should make an "import data" feature directly in the editor.

It is certainly easy enough to put a text area on the screen and let me copy-and-paste in a big JSON string. But while I was doing it, I thought, "Hey, I should just be able to drop a text file in here and have it auto-populate the text area from the file contents."

And so that's what I did.

It's not really that difficult thanks to the File API stuff that's been in JavaScript for a while now. Here is a sample web page that has a single text area on it.

<!doctype html>

<html lang="en">
<head>
  <meta charset="utf-8">
  <title>File Drop</title>
</head>

<body>

  <textarea id="drophere" style="width: 200px; height: 100px;">drop a file here</textarea>

</body>

<script src="https://code.jquery.com/jquery-1.11.2.min.js"></script>

</html>

To allow dropping text files in the text area, the following JavaScript is used.

$(function() {

    $("#drophere").on(
        "dragover",
        function(e) {
            e.preventDefault();
            e.stopPropagation();
        }
    );

    $("#drophere").on(
        "dragenter",
        function(e) {
            e.preventDefault();
            e.stopPropagation();
        }
    );

    $('#drophere').on("drop", function (evt) {

        var e = evt.originalEvent;

        if (e.dataTransfer) {

            if (e.dataTransfer.files.length) {

                evt.preventDefault();
                evt.stopPropagation();

                var file = e.dataTransfer.files[0];

                if (file.type != "text/plain") {
                    console.log("wrong file type");
                } else {

                    var reader = new FileReader();
                    reader.onload = function(fevent) {
                        var txt = fevent.target.result;
                        $('#drophere').val(txt);
                    }
                    reader.readAsText(file);

                }

            }

        }

    });

});

Since I use jQuery, everything is wrapped in a function that will be called as soon as the document is fully ready. Before setting up the actual drop handler, there are two other event handlers that should be registered to prevent undesirable browser behavior.

The first, on line 3, is the ondragover event handler. This event fires constantly while an element is being dragged over a drop target. All the event handler does here is prevent the default behavior of the browser, which in the case of a text area is to move the cursor around where the drop will take place. That isn't needed in my case because I plan on replacing the entire contents of the text area when the drop occurs.

The second event handler (line 11) is for the dragenter event. This event fires once when the dragged element first enters the drop zone. Again, I am just turning off the browser default behavior in here.

The next and final event handler that I register is for the drop event. This is where the good stuff happens. Because jQuery's event object wrapper doesn't really have direct support for the dataTransfer element, the first thing I do here is get the original event object from it. That's the object I will be using for most of the subsequent processing. First I check to make sure that there is a data transfer associated with this event and that its list of files contains at least one entry. If those two checks pass, I once again turn off event propagation and the default browser behavior. Remember, the browser will typically load any file you drop on a page as a new document - definitely not what I want to happen!

The next step is to get the file from the list of files in the data transfer and check the type. For my purposes, I only want to accept files that are plain text. It wouldn't make sense to drop an image or something in a text area! Assuming that the file type checks out, I can finally read the contents of the file. On line 36 I create a FileReader and then set the onload event handler. This is the function that will be called with the file data (or possibly an error) once the read is complete. It will be passed an event object, in which the text can be found in the target.result field. Once this function is set up, I just call readAsText and pass in the file (line 41).

Inside the onload function, line 39, is where I set the value of the text area to the contents of the text file. You could just as easily send the file contents directly to the server at this point or do some other processing. This technique will work on other kinds of files as well - instead of reading as text, you could read as a binary string, an array buffer, or a data URL. See the documentation for more info!

Give my demo a try for yourself and see how convenient it is to drop text file contents in text areas. You'll probably want to add this feature anywhere you have a text area on your own web pages.

And now the obligatory link to today's comic!

Amphibian.com comic for 15 June 2015

Friday, June 12, 2015

Free SSL Certificates

On Wednesday I talked about how I added SSL (but actually TLS) support to Amphibian.com in response to user requests. One thing that is always a problem when trying to secure a public web site is the high cost of certificates signed by "real" certificate authorities.

In order for you to not scare away your users with dire browser warnings, any secure certificate served by your web site must be signed by an authority that ships with the browser. While it is possible to install new authorities, it is not something that 99% of people would ever do. That means you're stuck paying a yearly fee to someone for keeping your site secure.

Unless you use StartSSL.

Yes! StartSSL provides FREE server certificates with 1-year terms and is a trusted authority in all common browsers. The catch is that it can be a little tricky to request and use them.

There are three steps to the process. First, you need a certificate installed in your browser that StartSSL can use to verify that you are who you claim to be. Go to https://www.startssl.com and click on the button in the upper right that looks like an ID card and some keys.

Click on the Keys to Begin
Then click on the link to sign up for an account. After filling out your name, address, phone number, and email address, click "Continue" and they will email you a verification code that you need to type in on the web page.

Fill out this form with your contact information.
Don't navigate off the page until you get the code and enter it! After that, they'll generate a key and install it in your browser. Now you can authenticate yourself with their system and move on to the next step.

The second step in this process is to use the Validations Wizard to validate your domain. Click on the Validations Wizard tab and then select "Domain Name Validation" from the drop-down box.

You want Domain Name Validation
On the next screen, enter the domain name for which you desire validation. StartSSL will read the WhoIs record for that domain and offer to send a verification code to one of the email addresses listed as contacts.

Enter your domain name. No www nothing, just the domain.
If you really control this domain, you should be able to get the email for at least one of them. Assuming that is true, get the validation code from your email and enter it on the next screen. Congratulations, you just verified your domain and you can move on to step 3.

Now go to the Certificates Wizard tab and select "Web Server SSL/TLS Certificate" from the drop-down list.


The next step will be to create the private key. This is the one you want to keep to yourself! Enter a password for it and continue to the next step. It will show you the key in a text area. Copy and paste it into a file called ssl.key and click Continue.


The next screen has you select the domain for which you are generating this certificate. A drop-down box will show you the list of all the domains that you have verified (that was the second part of this 3-part process). After that, you have to supply a single subdomain. The certificate will be good for both. Most people use "www" for this. After one final confirmation screen, they are ready to generate your certificate.

If everything went well, you should be given another text area with your certificate in it. Copy and save it to a file named ssl.cert or something with the domain name in it so you remember which one it is for.

Now you should have a private key and a certificate signed by a legitimate authority. You're all set, right? Well, almost. I know I said there are only three steps but there is maybe kinda one more. You see, the certificate is actually signed by a StartCom intermediate authority, not the ultimate root. Some platforms, such as Android devices, don't already trust the intermediate and will reject the certificate when served from your website. The solution is to serve a complete certificate chain, combining the certificate from the intermediate signer with your server's certificate.

First, go to the Toolbox tab and then click on "StartCom CA Certificates" on the left side. The certificates for the intermediate servers will then be available for download.


My certificate, since it is one of the free ones, is signed by the Class 1 Intermediate Server. Download that certificate and then concatenate it together with your ssl.cert file.

cat ssl.cert sub.class1.server.sha2.ca.pem > combined.cert

Now use the file combined.cert instead of ssl.cert for the server certificate in your web serving application of choice (Apache, Node, Nginx, etc) and even Androids will be happy.

I've done this a couple of times now for a few different domains and it works great. It is maybe a little involved, but it is free. In this case you get even more than what you pay for! Now have a look at today's comic - view it using transport layer security if you want!

Amphibian.com comic for 12 June 2015

Wednesday, June 10, 2015

Serving Your Express App With Encryption

While today's comic has nothing to do with encryption, you can actually view it (or any of my other content) using encrypted web communications. Most people refer to this as SSL, which stands for Secure Sockets Layer, but SSL is an older and now non-recommended way of securing web traffic. The current method is called TLS, which stands for Transport Layer Security. It doesn't make a whole lot of difference what you call it; it's the "s" in "https" when you're viewing a page with the reasonable assurance that no one is able to spy on your network traffic.

Don't mess with my network traffic!
Before last week, Amphibian.com had no way of delivering web pages in a secure manner. Why would it need to? It's just a web comic. But after my Bitcoin Paywall comic got so insanely popular I started to get people asking why I didn't have encryption enabled. Without it, there is a small chance that someone could inject their own Bitcoin address into the response from my server and take the money you are trying to send to me. So I decided to get a server certificate signed by a real authority (more on that Friday) and enable encryption for my comics.

This of course led me to create Monday's comic in which the frogs are surrounded by danger when viewed via http://amphibian.com/177 but are much safer when viewed via https://amphibian.com/177.

Now for the technical part. In order to get my Node/Express app to serve web pages over both secure and insecure web ports, 443 and 80 respectively, I had to make a few changes.

For simple apps, you are probably familiar with listening on a port this way:

var express = require("express");
var app = express();

// ... set up routes ...

var server = app.listen(3000, function() {
    console.log("listening on port %d", server.address().port);
});

But if you want the same Express instance to handle the traffic on multiple ports, you have to use the Node http module directly to create servers and pass in the Express instance as a parameter:

var express = require("express");
var http = require("http");
var app = express();

// ... set up routes ...

var server = http.createServer(app).listen(3000, function() {
    console.log("listening on port %d", server.address().port);
});

You could of course use this technique to listen on multiple insecure ports, like 3000 and 4000 as in this example:

var express = require("express");
var http = require("http");
var app = express();

// ... set up routes ...

var server1 = http.createServer(app).listen(3000, function() {
    console.log("listening on port %d", server1.address().port);
});

var server2 = http.createServer(app).listen(4000, function() {
    console.log("listening on port %d", server2.address().port);
});

However, if you want one of the listening ports to use encrypted communications you need to use the Node https module instead of http for one of them. In the simplest possible configuration it takes just one extra parameter - an options object which contains at a minimum the private key and public certificate for the server.

var express = require("express");
var http = require("http");
var https = require("https");
var fs = require("fs");
var app = express();

// ... set up routes ...

var server1 = http.createServer(app).listen(3000, function() {
    console.log("listening on port %d", server1.address().port);
});

var sslOptions = {
    key: fs.readFileSync("/path/to/private.key"),
    cert: fs.readFileSync("/path/to/server.cert")
};

var server2 = https.createServer(sslOptions, app).listen(4000, function() {
    console.log("listening securely on port %d", server2.address().port);
});

And just like that you can enable encrypted communications in your Express web app. Now read today's comic where the frogs try to avoid upsetting the apple cart...

Amphibian.com comic for 10 June 2015

Monday, June 8, 2015

Daughters++

While I was planning on writing about using Transport Layer Security in a Node + Express application to go along with today's comic, something else came up.

I went to the hospital this past Thursday with my wife for the birth of our fourth daughter. Since I was there until late afternoon on Saturday and sleep-deprived the whole time, blogging about encryption for web communications will have to wait until later in the week.

But here I am with the newest member of my family:


So enjoy today's comic without a technical discussion, and be sure to look at both secure and insecure versions!

Amphibian.com comic for 8 June 2015

Friday, June 5, 2015

Using Noise to Generate Game Maps

There is absolutely no technical tie-in to the comic today. It's just tree puns. Sorry. But since trees are part of the landscape, I thought this might be a good time to share some of what I'm working on with noise and map generation.

Yes, noise. Noise can generate random game maps. Not noise like that new Google Chrome plugin thing that sends URLs to people nearby using your computer speakers (interesting, but odd), noise as in Perlin noise.

Perlin noise is a procedural way to generate pseudo-random textures that appear more natural than those created with alternative methods. It was invented by Ken Perlin in 1983 while working on the CGI for the movie Tron.

I learned about this just over a month ago at a local Meetup group. If you consider that the map in a computer game is like a texture, Perlin noise can be used to easily generate terrain which seems realistic. I want to use this in a side project I'm working on for Amphibian.com, but I haven't had much time to experiment with it until this week. I now have a JavaScript utility that generates the noise, so I can create randomized maps both in the client browser and on my Node.js server.

I am using a noise generation library, perlin.js, by Joseph Gentle. It is based on Stefan Gustavson's public domain implementation and can be found here on GitHub: https://github.com/josephg/noisejs

Here is a simple example of it in action.

<!doctype html>

<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Perlin Noise</title>
  <style>
    body {
      margin: 0px;
    }

    div {
      height: 40px;
    }

  </style>
</head>

<body>

  <div id="map"></div>

<script src="http://code.jquery.com/jquery-1.11.3.min.js"></script>
<script src="perlin.js"></script>
<script>

var width = 33;
var height = 17;

noise.seed(Math.random());

for (var y = 0; y < height; y++) {

  var row = "<div>";

  for (var x = 0; x < width; x++) {

    var value = noise.simplex2(y / height, x / width);

    var e = "grass";
    if (value > 0.5) {
        e = "grass2";
    } else if (value < -0.5) {
        e = "water";
    }

    row += "<span><img src='" + e + ".png'/></span>";

  }

  row += "</div>";
  $("#map").append(row);

}

</script>

</body>

</html>

I'm building a simple map out of three possible tile images: grass, rough grass, and water. They are just 40x40 squares which are either green, green with darker green lines, or blue.

grass.png
grass2.png
water.png
I am looping over the height and width values and generating a noise value for each cell of the grid. The noise value will always be between -1 and 1. The "default" cell is grass. If the noise value is over 0.5 then that cell becomes rough grass. If it is under -0.5 it becomes water. I am just throwing the images in span tags wrapped in div rows for the purposes of this example. Try it yourself and see what it looks like. Hit refresh on your browser to generate a different random landscape each time. They always look fairly natural with the way the water and rough grass group together and intermingle with the plain grass tiles.
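The same threshold logic ports straight over to the server side. Here's a sketch of how the map could be built as a 2D array of tile names in Node instead of HTML spans; the noiseFn parameter is a stand-in for perlin.js's noise.simplex2 (or any generator that returns values in [-1, 1]), and the function names are my own.

```javascript
// Map a noise value in [-1, 1] to one of the three tile types.
function tileFor(value) {
    if (value > 0.5) {
        return "grass2"; // rough grass
    } else if (value < -0.5) {
        return "water";
    }
    return "grass"; // the default tile
}

// Build the map as a 2D array of tile names rather than HTML,
// so the same data could be sent to a client or used server-side.
function buildMap(width, height, noiseFn) {
    var map = [];
    for (var y = 0; y < height; y++) {
        var row = [];
        for (var x = 0; x < width; x++) {
            row.push(tileFor(noiseFn(y / height, x / width)));
        }
        map.push(row);
    }
    return map;
}
```

Rendering then becomes a separate step: the browser can turn each tile name into an img tag, while the server can store or analyze the array directly.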

Look at a few of the ones I generated.




Now if I could just randomly generate some comics. Fifty percent of the time, they'd be funny all of the time.

Amphibian.com comic for 5 June 2015

Wednesday, June 3, 2015

The Amphibian.com Software Stack

The work of a full-stack engineer. Delicious.
Today's comic references a number of popular software solution stacks before revealing that the CEO frog really just wants someone to make stacks of pancakes. Instead of hiring a short-order cook, try listing the job as "full stack engineer" and see who applies.

If you're not familiar with them, here they are:

LAMP
  • Linux Operating System
  • Apache HTTP Server
  • MySQL Database
  • Perl / PHP / Python Programming Language
MEAN
  • MongoDB
  • Express.js
  • AngularJS
  • Node.js
ELK
  • Elasticsearch
  • Logstash
  • Kibana
These "stacks" get catchy acronym names because they are so often used together, but there's nothing really binding about them. They're also not consistent in which parts of the system they name. For example, the MEAN stack still has to run on an operating system (probably Linux) but that part is left out of the name.

To make Amphibian.com, I didn't use one of these named stacks.

Here is my Frog Stack:
  • Linux
  • Express.js
  • Node.js
  • MySQL
It just doesn't have a nice acronym. LENM? I could re-arrange the words. MELN? ELMN? I could leave the operating system out, I guess. MEN? I could be more specific with my operating system and say Ubuntu instead of just Linux. Then I could have MUEN, NUME, MUNE, EMUN, or ENUM. I started the process of switching from MySQL to Postgres, so maybe someday I could run a LENP stack. Or a PELN? NELP?

Those are all terrible.

Why did I select Ubuntu Linux? Mainly because it's free and I like it. I initially selected MySQL because I've used it in a lot of projects over the years and I was familiar with it. I want to switch to Postgres now because I store mostly JSON and want to use the new features related to JSON storage in Postgres 9.4. I chose Node and Express because I wanted to start building applications with Node to learn about it. Express, I learned, was just the most popular web framework for Node. So I learned that as well. And it's been a good experience. I really like building applications with this set of components, even if it doesn't have a cool name.

I would like a full stack of pancakes now.

Amphibian.com comic for 3 June 2015

Monday, June 1, 2015

Easy Node Debugging

Today's comic is just silly. A frog's debugger is his tongue, since it is a very effective tool for removing bugs. Or maybe it just relocates them - directly to the frog's stomach. If you want to use a debugger with your Node.js applications, you'll want something better than a tongue.

I will admit that I was slow to adopt the use of a debugger for runtime code inspection. Early on in my software development career, when a program wasn't working the way I thought that it should, I just put in some code to print stuff to the console and restarted. That's when my applications were small and didn't take long to recompile and restart. By 2003, I was working on large Java EE systems where the process of rebuilding and redeploying took too long for System.out.println(...) to be an acceptable method for finding bugs. I actually got pretty good with the Java command-line debugger, but after seeing the Java debugger in the Eclipse IDE I gave up Emacs as my primary development environment and switched. I have no regrets.

Today I also do a lot of JavaScript development for Node in addition to my work with Java, but I have so far been unimpressed with the Node debugging capabilities integrated into Eclipse. I use Nodeclipse and am fairly happy with it for development purposes, but the debugger left me wanting something else.

I believe I found it with Node Inspector. The idea behind it is pretty cool - debug your JavaScript applications with the JavaScript debugger with which you are probably most familiar: the Chrome Dev Tools!

Here's how it works. First, install Node Inspector as a global module.
$ npm install -g node-inspector
Then instead of launching your Node app the "normal" way, do this instead:
$ node-debug app.js
As long as Chrome is your default browser (come on, why shouldn't it be?), the debugger window will launch as a new browser tab which will look like the Developer Tools window. Awesome!

Node Inspector debugging one of my demo apps. Using the Chrome Dev Tools in Chrome!

There are a few things to note here. First, you'll want to be using a version of Node greater than or equal to v0.11.13. Older versions work, but the debugging capabilities are limited. Second, since the debugger is also running in the browser, if you are debugging a web app you might find yourself switching tabs all the time. You can pop the tab out into a separate window, but it's still a third thing you need to deal with in addition to your IDE and the application you are testing. Not a huge problem, but it is something to consider. Maybe it's enough to make you finally add that third monitor to your development workstation...

Or if you want to get rid of bugs the old-fashioned way, try a frog.

Amphibian.com comic for 1 June 2015