Monday, February 16, 2015

Being Awesome

In today's comic, the frogs of today and of ancient times are using pictures instead of words to convey meaning. The problem is that sometimes the meaning is lost. My children know that the picture of a floppy disk means Save, but they have no idea what a floppy disk is.

Despite this problem, I do like to use small pictures on web pages to represent certain actions. If you'd like to do this as well, be sure to check out Font Awesome (if you haven't already). I used to create small images myself to use on web pages in <img /> tags for this purpose, but doing this with fonts and CSS makes even more sense. When the image is a scalable, vector-based character from a font, you can make it any size/color/rotation you need.

Using Font Awesome can seem like magic. You just put an empty tag in your HTML with a couple classes applied to it and it renders as a picture of something. The following example is the bomb!

<i class="fa fa-bomb"></i>

There is no actual magic involved, however. The CSS that accompanies the Font Awesome font makes use of the ::before pseudo-element to add a character of text to your markup. The character added depends on the class and maps to one of the images in the font.
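For example, the bomb above works because of rules roughly like these. I've trimmed this way down, and the exact code point depends on your Font Awesome version, so treat it as a sketch rather than the library's real stylesheet:

.fa {
    display: inline-block;
    font-family: FontAwesome;
    font-style: normal;
}

.fa-bomb::before {
    content: "\f1e2"; /* the character in the font that draws the bomb */
}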

Pseudo-elements let you do all kinds of crazy things via CSS. Like ::before, there is also an ::after. They allow you to add text or images (or nothing) to the page, but the additions don't actually become part of the DOM. That last part makes them a little difficult to work with sometimes, but they are still useful. Here's a weird example:

a::after {
    content: " (don't click on this!)";
}

That will add a dire warning to every link on the page. I didn't say it was a useful example. I said weird.

If you'd like some useful examples, check out CSS-Tricks. But don't forget about Font Awesome if you just want to put some icons on your pages now. And also don't forget to read today's comic.

Amphibian.com comic for 16 February 2015

Friday, February 13, 2015

My Lucky Day

Today (the date of this blog entry's publication) is Friday, the 13th day of February 2015. Don't be alarmed - Friday the 13th is nothing of which to be afraid.

I've told people for years that Friday the 13th is my lucky day, which is the exact opposite of how most people in the United States see it. I say that because my wife and I got our marriage license and bought our first home on Friday the 13th. That was September 2002. Just a few years ago, in 2013, we had another Friday the 13th in September. There'll be another one in 2019.

I'm not at all superstitious and I don't often encounter someone who is. Most people I know associate Friday the 13th more with a series of horror movies than with actual bad luck. Apparently, the belief that Friday the 13th is unlucky arose in the 19th century when superstitions concerning Fridays and the number 13 were combined. Sort of like when chocolate and peanut butter were first combined, but less delicious.

I'm not sure why people didn't like Fridays. These days, as the last day of the work week, Friday tends to be a favorite. But the number 13 has long been despised because it is one more than 12 - the number of completeness (or so they say). But seriously, the number 12 is everywhere. Months of the year. Hours on a clock (unless you use a 24-hour one). There were 12 tribes of Israel. Jesus had 12 disciples. Twelve is a popular number. Seven was also a popular number, but then seven ate nine and is now regarded more as a number to be feared.

So, back to 13. It is a prime number, and prime numbers are useful for public key cryptography and pseudorandom number generators. And pseudorandom number generators are used in computer games, where luck can be a factor.

However, don't confuse luck with randomness. Lots of things are random, but luck is subjective. If you believe today will be your lucky day, it will be. It's all in your perspective.

Keep in mind that the 13th day of the month actually occurs on Fridays more often than any other day of the week. Seriously, there's math to prove that. Also, there will be three Fridays the 13th this year. This is just the first one. There'll be another one next month and then again in November.
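If you don't feel like taking my word (or the mathematicians' word) for it, you can count it yourself. The Gregorian calendar repeats on a 400-year cycle, so tallying the weekday of every 13th across one full cycle gives exact figures. Here's a quick sketch you could paste into Node or a browser console:

var names = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];
var counts = [0, 0, 0, 0, 0, 0, 0];

// 400 years x 12 months = 4800 thirteenths to check
for (var year = 2000; year < 2400; year++) {
    for (var month = 0; month < 12; month++) {
        counts[new Date(year, month, 13).getDay()]++;
    }
}

for (var day = 0; day < 7; day++) {
    console.log(names[day] + ': ' + counts[day]);
}
// Friday comes out on top, just barely ahead of the other days.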

Well, enough about Friday the 13th. Don't forget that the 14th is Valentine's Day! Read today's comic to see if the frogs can help some algorithms find love.

Amphibian.com comic for 13 February 2015

Wednesday, February 11, 2015

What Does Pac-Man Smell Like?

Would This Smell Fruity?
Today I was talking to a friend about path-finding algorithms for how frogs in a game could find their way towards certain objects, and he suggested that I research "Pac-Man smell" as an example of a simpler method.

But then I began to wonder, what would Pac-Man smell like? For some reason, I theorize that he would smell like those candy hearts you get on Valentine's Day with the little messages written on them. I don't know why I think that, but I do.

That aside, I did look up the Pac-Man Smell system for path-finding. It is interesting enough that I thought I should share it. While smell is not mentioned at all in The Pac-Man Dossier, perhaps the definitive write-up on the game, Neal Ford talks about it in his book The Productive Programmer. When he discusses Anti-Objects, he uses Pac-Man as an example (see page 147 in the book).

Basically, the deal is that objects and object hierarchies, common in Object-Oriented Design, actually make some kinds of problems harder than they need to be. If you were to create a game like Pac-Man in Java, it's likely that you would have objects to represent Pac-Man, the ghosts, and the dots. So how would you make the ghosts chase Pac-Man? Obviously you'd put a lot of smarts into the ghost objects. But the algorithms needed in that case are complex and certainly beyond the computational capacity of the original Pac-Man arcade game. With today's hardware, we could make something work but, like Scrooge McDuck would always say, "Work smarter, not harder." If you turn the problem around, it can be solved in a much easier way.

In Ford's Anti-Object example, he states that the Pac-Man developers put more intelligence into the maze itself in order to make the ghosts dumber and yet still able to chase Pac-Man. When Pac-Man occupied a cell of the maze, that cell increased its "smell" value by one. When Pac-Man left a cell, its smell decreased by one and then continued decreasing by one every few clock ticks. For a ghost to track Pac-Man, it could now contain simple logic: move towards the cell with the highest Pac-Man smell.
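To make that concrete, here's a little sketch of the idea in JavaScript. It is not the arcade game's code - the grid size, the tick functions, and the pacman/ghost objects are all invented for the example, and I've left out walls entirely - but it shows how dumb the ghost logic can be when the maze does the thinking:

var WIDTH = 10, HEIGHT = 10;
var smell = [];
for (var y = 0; y < HEIGHT; y++) {
    var row = [];
    for (var x = 0; x < WIDTH; x++) {
        row.push(0);
    }
    smell.push(row);
}

// Each tick, every cell's smell fades a little and the cell
// Pac-Man is sitting on gets smellier. The maze holds the smarts.
function updateSmell(pacman) {
    for (var y = 0; y < HEIGHT; y++) {
        for (var x = 0; x < WIDTH; x++) {
            if (smell[y][x] > 0) {
                smell[y][x]--;
            }
        }
    }
    smell[pacman.y][pacman.x] += 2; // net +1 after the decay above
}

// A ghost needs no path-finding at all - it just steps into
// whichever neighboring cell smells most like Pac-Man.
function moveGhost(ghost) {
    var neighbors = [
        { x: ghost.x + 1, y: ghost.y },
        { x: ghost.x - 1, y: ghost.y },
        { x: ghost.x, y: ghost.y + 1 },
        { x: ghost.x, y: ghost.y - 1 }
    ];
    var best = { x: ghost.x, y: ghost.y };
    neighbors.forEach(function(n) {
        if (n.x >= 0 && n.x < WIDTH && n.y >= 0 && n.y < HEIGHT &&
                smell[n.y][n.x] > smell[best.y][best.x]) {
            best = n;
        }
    });
    ghost.x = best.x;
    ghost.y = best.y;
}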

And that is the essence of Anti-Object design. Sometimes thinking outside the object is the best and most efficient way to create the desired behavior. I would summarize this approach like this: do what works best even if it deviates from a perfect model. After all, the people playing the game won't appreciate how well the software is broken down into a perfect Object-Oriented Design. To them, it just has to work.

I'm still thinking about the best way to direct my frogs around. But the frogs are thinking about eggs, at least today.

Amphibian.com comic for 11 February 2015

Monday, February 9, 2015

It's All a Blur

How many times have you wanted to blur all or part of a web page? If you're anything like me (which you're probably not because I'm a weirdo), it happens all the time.

I tried to do this the other day and learned that there aren't a lot of good options. There is hope, however, since Chrome supports CSS Filter Effects, which include blur. It's only supported by Webkit at the moment (since Chrome 18), but we know that features such as this tend to seep into other browsers over time.

Using the Webkit filter is easy. Just apply a style like this:

-webkit-filter: blur(8px);

And just like that, your page content gets blurry.

But I know, not everyone is using Chrome. Some poor misguided individuals are still using Internet Explorer. What can be done? The good news is that in many simple cases, a jQuery plugin can provide a polyfill for the missing blur feature.

I tried out Foggy, one such plugin. If used in Chrome, -webkit-filter is applied. Otherwise, it dynamically creates a bunch of copies of the element and makes each one slightly transparent and offset to simulate the blurring.
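I haven't picked apart Foggy's source, so don't take this as its actual implementation, but the copy-and-offset trick goes something like this sketch (the fakeBlur name and the number of copies are made up, and it assumes the element's parent is positioned):

function fakeBlur($el, radius) {
    var copies = 4;
    var opacity = 1 / (copies + 1);
    var pos = $el.position();

    for (var i = 1; i <= copies; i++) {
        // spread the clones around the original by fractions of the radius
        var offset = (i - copies / 2) * (radius / copies);
        $el.clone().css({
            position: 'absolute',
            left: (pos.left + offset) + 'px',
            top: (pos.top + offset) + 'px',
            opacity: opacity
        }).appendTo($el.parent());
    }

    // fade the original so the whole stack averages out to a blur-ish smear
    $el.css('opacity', opacity);
    // e.g. fakeBlur($("#cell-1"), 8);
}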

Here's Foggy itself in action on my comic. All I had to do was include the jquery.foggy.js file and then call

$("#cell-1").foggy();

to get this result:

Foggy applied to a comic cell in Chrome
Looks good, right? Yeah! But that was in Chrome, my browser of choice. Let's see what happens in Firefox...

Foggy applied to a comic cell in Firefox. Whoa!
Fail! Not only did it make this cell look weird, it actually screwed up the element locations in the previous cell too. Clearly, this isn't going to work for me.

Don't discount Foggy for your own projects just yet. When applied to "normal" text and images, like in their demo page, Foggy does produce correct results in Firefox and Internet Explorer. It might work for you, depending on what you are doing with it.

But one thing is clear, and that is the fact that the future is blurry. Don't forget to read today's comic, where we continue to see what happens when frogs don't think clearly. Plus, make sure you check out our current FREE STICKERS promotion. There's a link at the top of the comic page for details!

Amphibian.com comic for 9 February 2015

Friday, February 6, 2015

Now Working for Tips

I'm trying something new this week. In the past, you may have read my posts about how I have no Bitcoins and how I lament the fact that the Internet is all advertisements.

I may have found a solution to both of those problems in ChangeTip. ChangeTip allows you to tip content creators on the Internet using Bitcoin. My comics have Facebook "Like" buttons on them, but liking a comic doesn't help me pay for the web server. With the ChangeTip "Tip" button, you can easily give me a dollar or two if you like my work.

Here's how I use it. I added the "Tip" button to the bottom of my comic page. When you click on it, a popup gives you the option to send me a tip out of your ChangeTip account or directly with Bitcoin. While you can tip in US dollar amounts, the tips are all converted to Bitcoins in my account.

There's the tip jar.

But there's more to this service than a Tip button. I haven't tried it yet, but you can also tip people via any social network just by mentioning both ChangeTip and the recipient in a post. For example, if you and I have both connected our Twitter accounts to ChangeTip, all you'd have to do in order to tip me $1 would be to tweet "hey @THECaseyLeonard, here's $1 @changetip" and I'd get a dollar's worth of bits from your ChangeTip account sent to mine. There's similar behavior on Facebook.

This seems like an interesting concept and, in my opinion, a good alternative to advertising as a way to make a few dollars as a content creator. My comic runs no ads, but my blog here does have some banners on it. In a good month, I'll earn maybe $2 from them. If people like just one or two comics per month enough to tip, I could easily replace that revenue.

The other benefit is that the money is more directly related to how people feel about the content I create. If I make $0.15 from an ad click here, it's not because someone really liked this blog post - it's because Google showed them an ad for something in which they were interested. If someone sends me a $0.15 tip, it's because they liked my blog post. The tip makes me feel better about the work I'm doing.

While I think the Tip button is a positive addition to my site, it is not perfect. It fits in nicely with the other social media buttons, but when you click on it the controls appear in a floating IFrame whereas the others typically use a separate pop-up window. I wouldn't mind that so much, except that it always expands down and right, which throws off the rest of the page - especially on mobile. Perhaps a mobile-optimized view would be in order.

I do feel a bit like the guy playing the saxophone in the subway station waiting for passers-by to toss change in a hat, but he and I have a lot in common I suppose. I guess I'll see what happens. It is a relatively new service, and my comic doesn't have a large readership, so I don't expect tips to start pouring in right away. Hopefully, though, the concept catches on and this service or one like it can usher in a new era of how the Internet is funded.

Amphibian.com comic for 6 February 2015

Wednesday, February 4, 2015

I Promise

Not This Kind of Promise
I've been working with Node for a while now. I wrote the whole application for my web comic using it as a way to learn something new and make something real at the same time. I like to be practical like that. One thing that I've noticed, as I'm sure anyone who's worked with JavaScript for more than 20 minutes has, is the tendency for large numbers of nested callbacks to create the Pyramid of Doom.

Sounds like you need to carry a whip and wear a fedora if you're going in there.

No, it's really just code like this. It grows sideways faster than it grows down.

call1(function(data) {
    call2(data, function(moreData) {
        call3(moreData, function(evenMoreData) {
            call4(evenMoreData, function(finalData) {
                // finally got what I wanted
            });
        });
    });
});

There is an alternative to this approach, called Promises. I just recently started replacing some of my callback-style code with Promises to see how it works.

A Promise, according to the Promises/A+ spec, is a representation of the eventual result of an asynchronous operation. The primary way of interacting with a promise is through its then method - where callbacks are registered to receive the eventual result of the operation or the reason why it failed.

Yeah, I had no idea what that meant the first time I read it either. I learn by experimenting. So I looked at some examples and then tried to convert one of my callback-style functions to use a Promise instead. Here's what happened.

First, I had to get a Promises library. Promises aren't supported natively in Node, so I used the Q module. It implements the Promises spec.

Here's the first function I modified to use Promises, shown here as it looked before the change.

function listImages(cb) {

    db.query('SELECT filename, type FROM comic_img', function(err, rows) {
        if (err) {
            cb(err);
        } else {
            var data = [];
            if (rows.length > 0) {
                for (var i = 0; i < rows.length; i++) {
                    data.push({
                        filename: rows[i].filename,
                        type: rows[i].type
                    });
                }
            }
            cb(null, data);
        }
    });

}

It's just one of many functions in my comic's data access module, the one that returns the list of all available images that can be used in a comic. As you can see, it implemented a callback model, where a function to be called (the callback) was supplied as the argument. The database is queried, and the resulting data is returned to the callback. As is typical for callbacks, the first argument sent to the callback function is any error that may have occurred or null if everything worked. The second argument will be the actual data which the caller was requesting.

Calling the listImages function in my web application originally looked something like this:

listImages(function (err, data) {
    if (err) {
        next(err);
    } else if (data) {
        res.setHeader('Content-Type', 'application/json');
        res.send(data);
    }
});

Now let's look at the function after I converted it to use Promises instead of callbacks.

var q = require('q');
function listImages() {

    var deferred = q.defer();

    db.query('SELECT filename, type FROM comic_img', function(err, rows) {

        if (err) {

            deferred.reject({
                message: 'database query failed',
                error: err
            });

        } else {

            var data = [];
            if (rows.length > 0) {
                for (var i = 0; i < rows.length; i++) {
                    data.push({
                        filename: rows[i].filename,
                        type: rows[i].type
                    });
                }
            }

            deferred.resolve(data);

        }
    });

    return deferred.promise;

}

The differences are not that extreme. At the top, I have to require the Q module of course. Internally, the function still makes the call to the database the same way, but it doesn't require a callback function to be passed in as an argument. Instead, it creates a deferred object before running the query and returns deferred.promise at the end. That's different - the original version didn't return anything. If the database query fails, I call deferred.reject and pass in an object describing the error. If everything works, I call deferred.resolve and pass in the data. But the database call happens asynchronously - whoever called this function gets the promise returned to them and has to use it to handle the eventual data or failure. So here's how the caller changed:

listImages().then(function(data) {
    res.setHeader('Content-Type', 'application/json');
    res.send(data);
}, function(error) {
    next(error.error);
});

Calling listImages now returns a Promise, and the primary way of interacting with Promises is through the then method. The first parameter to the then function is a function to be called when the Promise is resolved, taking the data that was produced (what I passed to the deferred.resolve function above). The second parameter is a function to be called if the Promise is rejected and taking an object representing the reason for rejection (the error description object I passed to deferred.reject).
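The payoff shows up when you chain these. If call1 through call4 from the pyramid at the top of this post were rewritten to return promises (hypothetically - I haven't actually converted them), the nesting flattens into a chain, and Q's fail method gives you one place to catch an error from any step:

call1()
    .then(function(data) {
        return call2(data);
    })
    .then(function(moreData) {
        return call3(moreData);
    })
    .then(function(evenMoreData) {
        return call4(evenMoreData);
    })
    .then(function(finalData) {
        // finally got what I wanted
    })
    .fail(function(error) {
        // any rejection along the way lands here
    });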

So there is my simple example of using Promises instead of the callback pattern to deal with asynchronous operations. There is a lot more that the Promises specification offers and much more you can do with the Q library. I'm going to keep working with it and see what happens. I might post more later. In the meantime, you can read this comic about frogs in the Silicon (Dioxide) Valley.

Amphibian.com comic for 4 February 2015

Monday, February 2, 2015

Can We Get To The Bottom Of This?

If you followed the link here from today's comic, you may have just experienced an infinite scroll. If not, I'm sure you've experienced it on Twitter, Facebook, or Pinterest. No matter how far down on the page you go, more stuff keeps getting added to the bottom. You can never reach the end!

Obviously, the comic today invokes the stuck-in-a-loop element from the Bill Murray film Groundhog Day as well as the tendency for web pages to scroll forever these days. The "invention" of the infinite scroll web page was a response to the rise in mobile web access. It is more natural to keep moving your thumb to access more information than it is to touch "previous" or "next" buttons.

If you want to add the infinite scroll element to your own web application, be careful. As a design paradigm, it doesn't work for all situations. Consider what type of information is to be displayed. Will all data be of equal relevance, or will the most important be near the top? Is the data on a timeline? Facebook and Twitter show you the newest items at the top (more or less) and the more you scroll, the farther back in time you go. They make some exceptions to that in the case of conversations, because most people want to read the beginning before the end.

Also, consider how navigation is affected. In my comic, the links to this blog, the social media buttons, and the previous/next navigation links are at the bottom of the page. I had to make special accommodations for today's comic in the form of a fixed-position panel at the bottom that can be shown and hidden. You could have a similar issue if your navigation is at the top and your user has scrolled 4800 feet down the page.

One final issue is that if your dynamically-added content waaaaay down on the page is a link to something else, the user might be disappointed when they use the back button and don't end up back at the same place they left.

If you're okay with all those issues and want to make your own page scroll forever, it is very easy to do with jQuery. Here is an example of what I did for my comic.

$(function() {

    var addPoint = 300;

    $(window).scroll(function(){

        if ($(this).scrollTop() > addPoint) {

            // get some new data
            // add it to your page

            addPoint += 800;

        } 

    });
 
});

My code sets up a function that will listen for scroll events on the window. The value returned from scrollTop() will be the number of pixels hidden from view due to the page scrolling down. Initially, I want to add more data when the user scrolls past 300 pixels. But as content gets added, the point at which I want to add the next batch increases as well. Typically, the data you want to add will come from your server via an asynchronous call - so make sure you start the process before the user gets the whole way to the bottom to ensure it gets added in time.
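Just so the comments in that snippet aren't hand-waving, the "get some new data" part usually ends up looking something like this. The /more-comics URL and the #content container are made up for the example:

function loadMore() {
    $.getJSON('/more-comics', function(items) {
        $.each(items, function(i, item) {
            // append whatever markup makes sense for your data
            $('#content').append('<p>' + item.text + '</p>');
        });
    });
}

A call to loadMore() would go where the two comments sit in the scroll handler above.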

If you want to address the navigation issue as well, I like to sometimes use the Affix tool that comes with Twitter Bootstrap. I even make use of it in my comic editor - though it doesn't scroll forever, there are some things I like to keep visible on the page at all times, no matter how far down I have to go. It's extremely simple to set up on your Bootstrap-enabled page.

<div data-spy="affix" data-offset-top="50">
    <div>
        <p>You can put navigation or whatever here.</p>
    </div>
</div>

But you also need some of your own styles to make it work. Here is some CSS to go along with the HTML code above.

.affix {
    top: 8px;
    left: 8px;
}

.affix-top {
    position: fixed;
    top: 110px;
    left: 8px;
}

As soon as the page loads, the Bootstrap JavaScript adds the affix-top class to your element. What that means is totally up to you - in my example I specify the affix-top class to mean the element has a fixed position 110 pixels from the top of the page and 8 pixels from the left. When the user scrolls down farther than the value given in the data-offset-top attribute, 50 pixels in my example, the affix-top class is removed and replaced with the affix class. Again, what this means is up to you. Bootstrap specifies "position: fixed" but nothing more. In my CSS, I specify the position as 8 pixels from the top and 8 from the left. That will keep it in view no matter how far the page is scrolled. When you scroll back up to the top, Bootstrap reverses the process and puts the affix-top class back on in place of the affix class.

If you're in the mood for a scroll that's a little more finite, there's always One Mile Scroll. Which is, you know, one mile. Slightly shorter than infinity.

Amphibian.com comic for 2 February 2015