Friday, February 27, 2015

What My Users Are Using

Since I got a request to show some of Amphibian.com's user agent stats after Monday's post, I decided to put some information together and share it today. There are a few surprises.

These numbers are based on the agent strings for the last week. I had a pretty good week for traffic, due no doubt to my guest-authorship of Tavern Wenches on Monday.

First, let's look at the coarse-grain browser usage.
  • Firefox: 24.61%
  • Chrome: 29.78%
  • Internet Explorer: 15.38%
  • Safari: 1.97%
  • Other: 28.25%
What is "Other"? Mostly just spiders and bots, and there sure are a lot of them. Still, over half of the traffic is people using Firefox or Chrome, which is great to see.

Now to break it down some more. What versions of the browsers are being used? I'll start with Firefox.
  • version 35: 36.64%
  • version 3: 30.31%
  • version 6: 15.99%
  • version 31: 7.66%
  • version 28: 2.41%
  • version 26: 1.08%
All other versions were each less than 1% of Firefox traffic. It's no surprise that 35 is the most common, since up until 3 days ago it was the current version. What's really shocking is that version 3 is the next most common! It dates back to June of 2008, before the automatic update process existed, which is why so many people are stuck on it. Leaks memory like mad though. Version 6 was also popular; it too came out before forced updates, back in August of 2011. It didn't make the cutoff since it accounted for less than one percent, but there was one comic accessed with Firefox 2. Whoever you are, please upgrade. The good news: Firefox has always had pretty good SVG support, so almost all of these people are seeing the frog comics more-or-less correctly. Even CSS3 Transforms, which I use relatively often, have been supported all the way back to 3.5.

Chrome is next.
  • version 40: 78.65%
  • version 39: 7.30%
  • version 38: 7.16%
  • version 36: 1.90%
  • version 35: 1.12%
  • version 30: 1.90%
All other Chrome versions were used less than 1% each. Thanks to Chrome's excellent automated update process, the vast majority of users are on the current version, released on 21 January 2015. There were 2 comics accessed from Chrome 41, which is currently in the beta channel. You are awesome, whoever you are. The oldest version of Chrome used was 10, released in March of 2011. Even that person got to see the frog comics almost perfectly. Chrome is my browser of choice and has always supported SVG graphics and CSS3 transforms.

Now how about those IE users?
  • version 6: 11.74% (fail!)
  • version 7: 1.66% (fail!)
  • version 8: 69.48% (fail!)
  • version 9: 1.66%
  • version 10: 9.53%
  • version 11: 5.94%
Booooo! That's right, IE 8 is still the most common IE used to visit Amphibian.com. Those folks are leaving disappointed. As are all those people using 7 and 6. Come on, people! IE 6?!?! Really? That browser's older than dirt! That browser nearly killed the web! Ugh. Anyway, the 17% of IE users who come with 9, 10, or 11 will be fine. The site looks presentable in 9 and should be perfect in 10 and 11. Internet Explorer's SVG support started in 9, but not all of the things I do with CSS3 were supported until 10. It degrades rather well though.

Safari doesn't show up enough to be significant, but almost everyone using it is on version 8.

The one other thing I like to look at is operating system. Since I went to the trouble of optimizing Amphibian.com for mobile devices, I'd like to see more mobile browsing of it. So far, though, I'm not doing so well in that area.
  • Windows: 58.98%
  • Mac: 4.48%
  • iOS: 3.25%
  • Android: 6.21%
  • Linux: 3.41%
  • Other: 23.67%
Again, "Other" accounts for a large share. It's the spiders and bots. Which is fine, I guess; one of the design features of my site is that the frog's words are just P tags and can therefore be indexed by the search engines. But back to the real people - there are relatively few viewing the comics on mobile devices. Windows is by far the most popular. I am pleasantly surprised to see over 3% on Linux though.

Enough of the statistics now, back to the comics!

Amphibian.com comic for 27 February 2015

Wednesday, February 25, 2015

Achievement Unlocked

I had a major achievement this week - I guest-authored for Sarah Frisk's web comic Tavern Wenches!

My Guest-Authored Tavern Wenches
I'm happy to help out another webcomic, and I am excited because it makes me feel more like a "real" webcomic author. I mean, it's one thing to call myself a webcomic author because I can put frog graphics on a web page, but I must be a real webcomic author if another webcomic author believes me to be a webcomic author.

Drawing someone else's comic for them is a little terrifying. I didn't think of that before I volunteered, but afterwards I developed some anxiety. I hadn't been reading Tavern Wenches that long (I discovered Sarah's work through my participation in CodeNewbie's Twitter chats) and I was nervous that I didn't have enough background to do it justice. So I read over most of the site's archive a few times. Tavern Wenches had guest authors twice before, so at least I wasn't the first. But I also considered that as an established comic, its regular readers would most likely have expectations that my comic style might not be able to meet. Would people be disappointed with some silly puns and a frog? Finally, with the exception of a crayon sketch of my daughter here and there, I hadn't really drawn a human character in a looooong time. While I did once draw with pencil and paper (a comic strip about a frog that ran in a local newspaper for many years), all my work lately has been in vector graphics. I wasn't quite sure where to start drawing a person again. Could I really do this?

The Last Person I Drew
I wasn't sure. But after a day I decided to do my best and stick with what works for me. That meant Inkscape, frogs, and puns. Being a guest author doesn't mean you have to do everything the way the regular author does it. It's more about bringing your own unique style and perspective to someone else's comic world (at least that's what I decided). So I made an SVG of Veronica and had her do a double act with a frog I styled to appear more medieval. After I had everything arranged in Inkscape, I exported it to a .PNG and sent it to Sarah.

Despite my trepidation, I think it turned out pretty good. It was beneficial for me to work outside my comfort zone and draw a person again. I'm not saying that Amphibian.com will start having human characters, but I won't rule it out simply because I'm not sure that I can draw one.

So in addition to today's regular frog comic, I would encourage everyone to go check out Tavern Wenches - not just my guest comic, but Sarah's regular comics as well. I really like her art style. She also does a monthly comic called Monster Markup Manual which features monsters from Tavern Wenches helping to instruct people on software development topics. My frogs should probably look into reading those, because, well, you know...the frogs don't really know what they're doing.

Amphibian.com comic for 25 February 2015

Monday, February 23, 2015

Talk to my Agent

Do you know which web browser you are using right now? I hope so.

Do you know which web browser the users of your site are using right now?  You should.

There are people who track global market share of different web browsers and provide that data to the public, so you can know that 58% of the world is using Internet Explorer, for example.

Maybe the most well known is Net Market Share.

But I'm talking about just your users. Are you tracking the browsers being used by the people who are visiting and using the application for which you are responsible? It is valuable data that can be used for both planning and debugging purposes.

For amphibian.com, I collect the User-Agent strings from the browsers that visit the site. The User-Agent is a bit of text that all browsers send to the servers with each request. It doesn't identify you personally, but it does tell the server the product name and version (like Chrome 40) and the layout engine name and version.

If you want to see what your browser says it is, just go to http://whatsmyuseragent.com/ and it will tell you.

What are some of these?
The problem is that these strings have gotten out of hand. They almost always start with "Mozilla." Chrome and Internet Explorer both do this, and they couldn't be more different. Firefox should theoretically be the only browser that could claim Mozilla heritage, but there are obviously no rules. Web developers have no right to complain though, since they caused it. When people started blocking access to web sites based on which browser was requesting the access (remember things like "This site only works in Netscape Navigator"), the browser manufacturers responded by putting "Mozilla" in every agent string.
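
For a sense of how muddled things have gotten, here are a few representative agent strings (typical patterns for these browsers on Windows 7; the exact tokens vary by platform and build):

```
Firefox 35:  Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0

Chrome 40:   Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36
             (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36

IE 8:        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)
```

Chrome claims to be Mozilla, AppleWebKit, KHTML, Gecko, and Safari all at once, and only one token near the end tells you what it really is.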

Have a look at this site if you want to see a rather exhaustive list of possible agents: Zytrax.com Browser IDs.

Given the absurd amount of similarity and variety possible in User-Agent strings, it's best to find a tool to help you process them. Since Amphibian.com is a JavaScript-based Node application, I use UAParser.js, a very convenient parser module. It can be used both on the back-end, like I do, and in the browser if you need to parse the agent string client-side.

Usage is simple.

var UAParser = require("ua-parser-js"); // in Node; in the browser, the script include defines UAParser
var parser = new UAParser();
var r = parser.setUA("some user-agent string").getResult();

console.log(r.browser.name);    // "Chrome", "Firefox", etc.
console.log(r.browser.version); // the complete version, like 40.0.2214.115
console.log(r.browser.major);   // just the version whole number, like 40
console.log(r.os.name);         // "Windows", "Linux", etc.
console.log(r.os.version);      // 95, ME, 7, 8, etc.
console.log(r.engine.name);     // "WebKit", "Gecko", etc.
console.log(r.engine.version);  // something like 537.36

After creating the UAParser object, get an object that represents the result of a parse by calling setUA and passing in a string. The result object contains a nicely organized breakdown of the agent. Easy!
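
Once every request's agent string has been run through the parser, turning the pile of results into a percentage breakdown by browser takes only a few lines. Here's a rough sketch (the browserShare function and its input shape are my own illustration, assuming one UAParser result object per request):

```javascript
// Given an array of UAParser result objects, compute each browser's
// percentage share of the traffic. Results with no recognizable browser
// name get lumped into "Other" (hello, spiders and bots).
function browserShare(results) {
    var counts = {};
    var total = results.length;
    results.forEach(function (r) {
        var name = (r.browser && r.browser.name) || 'Other';
        counts[name] = (counts[name] || 0) + 1;
    });
    var share = {};
    Object.keys(counts).forEach(function (name) {
        share[name] = (100 * counts[name] / total).toFixed(2) + '%';
    });
    return share;
}
```

Feed it a week of parsed agents and you get back something like { Chrome: '29.78%', Firefox: '24.61%', ... }.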

Why don't you take your user agent over to my comic now and see what the frogs are up to today?

Amphibian.com comic for 23 February 2015

Friday, February 20, 2015

It Keeps Blinking, But I'm Not Turning

You know what I miss about the Internet from the 90's?

Blinking text!

You know you miss it too! You don't have to live in denial any longer. Just accept it. It's okay.

Blink.

Blink.

Blink.

Even though Netscape Navigator is a distant memory, you can relive your glory days of website design by making text blink using jQuery. It's really easy.

Just take whatever elements you want to blink and add the blink class to them, like I did with this paragraph tag:

<body>

  <p class="blink">This text should blink.</p>

</body>

Then in a script tag, just do this (updated):

$(function() {

    var vis = "hidden";
    function bringBackBlink() {
        $(".blink").css("visibility", vis);
        vis = ( vis === "hidden" ? "visible" : "hidden" );
    }

    setInterval(bringBackBlink, 500);

});

Updated: After the bringBackBlink (say that 5 times fast) function is defined, setInterval makes sure it gets called every 500 milliseconds. All the function does is find all elements on the page that have the blink class and toggle their visibility value. If they are visible, they become hidden. If hidden, they become visible. It will happen twice every second. If you want to make them blink faster or slower, just change the millisecond value.

As pointed out in the comments, I originally used jQuery's .toggle() function to alternately hide and show the element. This will only work correctly in situations where the element to be blinked is using fixed or absolute positioning (that's why it works for me in frog comics). In "normal" situations, like my paragraph tag example above, all the other elements on the page will shift around. This is because jQuery's toggle function sets the value of the display property to none, which causes the hidden element to no longer take up space on the page. Using the visibility property instead keeps things where they belong.

Now look at this example of HTML. It has lots of other non-blinking elements on the page that won't shift around when things blink.

<body>

  <p>This text should not blink.</p>

  <p class="blink">This text should blink.</p>

  <p>This text should not move.</p>

  <p class="pink">
    <span class="blink">
      This text should blink and be pink.
    </span>
  </p>

  <p>This text should not move.</p>

</body>

<style>

.pink {
    background-color: #000000;
    color: #FF3399;
    padding: 5px;
}

</style>

Here is an animated GIF showing the results:


Just like neon windbreakers and cargo pants, blinking text on the web is coming back in style! (Disclaimer: I don't actually know if neon windbreakers and cargo pants are coming back in style)

And speaking of blinking...if you still have your Christmas lights up like the frogs do, it might be time to take them down.

Amphibian.com comic for 20 February 2015

Wednesday, February 18, 2015

New Zero-Core Apple

Pair Programming is different from Pear Programming. Apples are different from oranges. Frogs are different from toads. Cores are different from food?

While writing today's comic, I was informed by my wife that apples don't really have cores. "Preposterous!" I exclaimed. Everyone knows apples have cores. No one eats the middle part of the apple. Well, I actually don't eat any part of the apple because I am allergic to apples. But normal people don't eat the middle part of the apple.

It turns out that the Internet agrees with my wife. While most people don't like eating seeds or stems, you can actually eat the entire apple. You won't even really notice anything "core-ish" about that middle part if you eat the apple sideways - start at the bottom and eat your way to the top.

But does pair programming really result in fewer core dumps? I don't know, but pear programming most certainly does.

Sorry, I don't have any deep thoughts on pair programming to share. Or any thoughts about core dumps or multi-core CPU architecture. Or really much of anything besides my shock at learning about the apple core thing. It's been a rough week. Read a comic.

Amphibian.com comic for 18 February 2015

Monday, February 16, 2015

Being Awesome

In today's comic, the frogs of today and of ancient times are using pictures instead of words to convey meaning. The problem is that sometimes the meaning is lost. My children know that the picture of a floppy disk means Save, but they have no idea what a floppy disk is.

Despite this problem, I do like to use small pictures on web pages to represent certain actions. If you'd like to do this as well, be sure to check out Font Awesome (if you haven't already). I used to create small images myself to use on web pages in <img /> tags for this purpose, but doing this with fonts and CSS makes even more sense. When the image is a scalable, vector-based character from a font, you can make it any size/color/rotation you need.

Using Font Awesome can seem like magic. You just put an empty tag in your HTML with a couple classes applied to it and it renders as a picture of something. The following example is the bomb!

<i class="fa fa-bomb"></i>

There is no actual magic involved, however. The CSS that accompanies the Font Awesome font makes use of the ::before pseudoelement to add a character of text to your markup. The character added depends on the class and maps to one of the images in the font.
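
A stripped-down sketch of what that CSS looks like (my simplification, not Font Awesome's actual stylesheet; \f1e2 is the private-use codepoint the version 4 font maps to the bomb glyph):

```
.fa {
    font-family: FontAwesome; /* the icon font */
    font-style: normal;       /* <i> tags would otherwise render italic */
}

.fa-bomb::before {
    content: "\f1e2"; /* this character in the font is drawn as a bomb */
}
```

The empty tag contributes no text of its own; the ::before rule injects the one character, and the font does the rest.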

Pseudoelements let you do all kinds of crazy things via CSS. Like ::before, there is also an ::after. They allow you to add text or images (or nothing) to the page, but the additions don't actually become part of the DOM. That last part makes them a little difficult to work with sometimes, but they are still useful. Here's a weird example:

a::after {
    content: " (don't click on this!)";
}

That will add a dire warning to every link on the page. I didn't say it was a useful example. I said weird.

If you'd like some useful examples, check out CSS-Tricks. But don't forget about Font Awesome if you just want to put some icons on your pages now. And also don't forget to read today's comic.

Amphibian.com comic for 16 February 2015

Friday, February 13, 2015

My Lucky Day

Today (the date of this blog entry's publication) is Friday, the 13th day of February 2015. Don't be alarmed, Friday the 13th is nothing of which to be afraid.

I've told people for years that Friday the 13th is my lucky day, which is the exact opposite of how most people in the United States see it. The reason I say that is because my wife and I got our marriage license and bought our first home on Friday the 13th. That was September 2002. Just a few years ago, in 2013, we had another Friday the 13th in September. There'll be another one in 2019.

I'm not at all superstitious and I don't often encounter someone who is. Most people I know associate Friday the 13th more with a series of horror movies than with actual bad luck. Apparently, the belief that Friday the 13th is unlucky arose in the 19th century when superstitions concerning Fridays and the number 13 were combined. Sort of like when chocolate and peanut butter were first combined, but less delicious.

I'm not sure why people didn't like Fridays. As the last day of the work week, we tend to favor them now. But the number 13 has been long-despised because it is one more than 12 - the number of completeness (or so they say). But seriously, the number 12 is everywhere. Months of the year. Hours in a day (unless you use a 24-hour clock). There were 12 tribes of Israel. Jesus had 12 disciples. Twelve is a popular number. Seven was also a popular number, but then seven ate nine and is now regarded more as a number to be feared.

So, back to 13. It is a prime number, and prime numbers are useful for public key cryptography and pseudorandom number generators. And pseudorandom number generators are used in computer games, where luck can be a factor.

However, don't confuse luck with randomness. Lots of things are random, but luck is subjective. If you believe today will be your lucky day, it will be. It's all in your perspective.

Keep in mind that the 13th day of the month actually occurs on Fridays more often than any other day of the week. Seriously, there's math to prove that. Also, there will be three Fridays the 13th this year. This is just the first one. There'll be another one next month and then again in November.
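
The math is easy to check yourself, since the Gregorian calendar repeats exactly every 400 years. Just count which weekday the 13th lands on over one full cycle (a quick sketch; JavaScript's Date numbers Sunday as 0 and Friday as 5):

```javascript
// Tally the weekday of the 13th over one full 400-year Gregorian cycle
var days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];
var counts = { Sun: 0, Mon: 0, Tue: 0, Wed: 0, Thu: 0, Fri: 0, Sat: 0 };

for (var year = 2000; year < 2400; year++) {
    for (var month = 0; month < 12; month++) {
        counts[days[new Date(year, month, 13).getDay()]]++;
    }
}

console.log(counts); // Friday comes out on top: 688 of the 4800 months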

Well, enough about Friday the 13th. Don't forget that the 14th is Valentine's Day! Read today's comic to see if the frogs can help some algorithms find love.

Amphibian.com comic for 13 February 2015

Wednesday, February 11, 2015

What Does Pac-Man Smell Like?

Would This Smell Fruity?
Today I was talking to a friend about path-finding algorithms for how frogs in a game could find their way towards certain objects, and he suggested that I research "Pac-Man smell" as an example of a simpler method.

But then I began to wonder, what would Pac-Man smell like? For some reason, I theorize that he would smell like those candy hearts you get on Valentine's Day with the little messages written on them. I don't know why I think that, but I do.

That aside, I did look up the Pac-Man Smell system for path-finding. It is interesting enough that I thought I should share it. While smell is not mentioned at all in The Pac-Man Dossier, perhaps the definitive write-up on the game, Neal Ford talks about it in his book The Productive Programmer. When he discusses Anti-Objects, he uses Pac-Man as an example (see page 147 in the book).

Basically, the deal is that objects and object hierarchies, common in Object-Oriented Design, actually make some kinds of problems harder than they need to be. If you were to create a game like Pac-Man in Java, it's likely that you would have objects to represent Pac-Man, the ghosts, and the dots. So how would you make the ghosts chase Pac-Man? Obviously you'd put a lot of smarts into the ghost objects. But the algorithms needed in that case are complex and certainly beyond the computational capacity of the original Pac-Man arcade game. With today's hardware, we could make something work but, like Scrooge McDuck would always say, "Work smarter, not harder." If you turn the problem around, it can be solved in a much easier way.

In Ford's Anti-Object example, he states that the Pac-Man developers put more intelligence into the maze itself in order to make the ghosts dumber and yet still able to chase Pac-Man. When Pac-Man occupied a cell of the maze, that cell increased its "smell" value by one. When Pac-Man left a cell, its smell decreased by one and then continued decreasing by one every few clock ticks. For a ghost to track Pac-Man, it could now contain simple logic: move towards the cell with the highest Pac-Man smell.
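
A minimal sketch of the idea (my own toy version, not the arcade original's code): the maze grid carries the smell, and the ghost just greedily follows its nose.

```javascript
// Each maze cell holds a "Pac-Man smell" value that decays over time.
var smell = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0]
];

// Called every few ticks: fade old smell, then mark Pac-Man's current cell.
function updateSmell(pacRow, pacCol) {
    for (var r = 0; r < smell.length; r++) {
        for (var c = 0; c < smell[r].length; c++) {
            if (smell[r][c] > 0) smell[r][c]--;
        }
    }
    smell[pacRow][pacCol] = 10; // the fresh trail is strongest
}

// The ghost's entire brain: step to the neighboring cell with the
// highest Pac-Man smell.
function ghostStep(row, col) {
    var moves = [[row - 1, col], [row + 1, col], [row, col - 1], [row, col + 1]];
    var best = { row: row, col: col, value: -1 };
    moves.forEach(function (m) {
        var r = m[0], c = m[1];
        if (r >= 0 && r < smell.length && c >= 0 && c < smell[0].length &&
            smell[r][c] > best.value) {
            best = { row: r, col: c, value: smell[r][c] };
        }
    });
    return { row: best.row, col: best.col };
}
```

Call updateSmell as Pac-Man moves and the ghosts simply shamble along the fading trail toward him - no graph search, no pathfinding algorithm, no smarts in the ghost at all.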

And that is the essence of Anti-Object design. Sometimes thinking outside the object is the best and most efficient way to create the desired behavior. I would summarize this approach like this: do what works best even if it deviates from a perfect model. After all, the people playing the game won't appreciate how well the software is broken down into a perfect Object-Oriented Design. To them, it just has to work.

I'm still thinking about the best way to direct my frogs around. But the frogs are thinking about eggs, at least today.

Amphibian.com comic for 11 February 2015

Monday, February 9, 2015

It's All a Blur

How many times have you wanted to blur all or part of a web page? If you're anything like me (which you're probably not because I'm a weirdo), it happens all the time.

I tried to do this the other day and learned that there aren't a lot of good options. There is hope, however, since Chrome supports CSS Filter Effects, which include blur. It's only supported by Webkit at the moment (since Chrome 18), but we know that features such as this tend to seep into other browsers over time.

Using the Webkit filter is easy. Just apply a style like this:

-webkit-filter: blur(8px);

And just like that, your page content gets blurry.

But I know, not everyone is using Chrome. Some poor misguided individuals are still using Internet Explorer. What can be done? The good news is that in many simple cases, a jQuery plugin can provide a polyfill for the missing blur feature.

I tried out Foggy, one such plugin. If used in Chrome, -webkit-filter is applied. Otherwise, it dynamically creates a bunch of copies of the element and makes each one slightly transparent and offset to simulate the blurring.
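
The clone trick can be sketched like this (my own rough approximation of the technique, not Foggy's actual code): compute a ring of slightly offset, translucent positions, then stack a copy of the element at each one.

```javascript
// Positions for the translucent copies: `copies` points in a ring around
// the original, each `radius` pixels out, all sharing the same low opacity
// so the stack adds up to roughly full strength.
function blurOffsets(radius, copies) {
    var offsets = [];
    for (var i = 0; i < copies; i++) {
        var angle = (2 * Math.PI * i) / copies;
        offsets.push({
            left: Math.cos(angle) * radius,
            top: Math.sin(angle) * radius,
            opacity: 1 / (copies + 1)
        });
    }
    return offsets;
}

// In the page you would then clone the element once per offset, e.g.:
// blurOffsets(4, 8).forEach(function (o) {
//     $el.clone().css({
//         position: 'absolute',
//         left: o.left, top: o.top, opacity: o.opacity
//     }).insertAfter($el.css('opacity', 1 / 9));
// });
```

You can see why absolutely-positioned content survives this and normal document flow doesn't: every clone is a real element fighting for space.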

Here it is in action on my comic. All I had to do was include the jquery.foggy.js file and then

$("#cell-1").foggy();

to get this result:

Foggy applied to a comic cell in Chrome
Looks good, right? Yeah! But that was in Chrome, my browser of choice. Let's see what happens in Firefox...

Foggy applied to a comic cell in Firefox. Whoa!
Fail! Not only did it make this cell look weird, it actually screwed up the element locations in the previous cell too. Clearly, this isn't going to work for me.

Don't discount Foggy for your own projects just yet. When applied to "normal" text and images, like in their demo page, Foggy does produce correct results in Firefox and Internet Explorer. It might work for you, depending on what you are doing with it.

But one thing is clear: the future is blurry. Don't forget to read today's comic, where we continue to see what happens when frogs don't think clearly. Plus, make sure you check out our current FREE STICKERS promotion. There's a link at the top of the comic page for details!

Amphibian.com comic for 9 February 2015

Friday, February 6, 2015

Now Working for Tips

I'm trying something new this week. In the past, you may have read my posts about how I have no Bitcoins and how I lament the fact that the Internet is all advertisements.

I may have found a solution to both of those problems in ChangeTip. ChangeTip allows you to tip content creators on the Internet using Bitcoin. My comics have Facebook "Like" buttons on them, but liking a comic doesn't help me pay for the web server. With the ChangeTip "Tip" button, you can easily give me a dollar or two if you like my work.

Here's how I use it. I added the "Tip" button to the bottom of my comic page. When you click on it, a popup gives you the option to send me a tip out of your ChangeTip account or directly with Bitcoin. While you can tip in US dollar amounts, the tips are all converted to Bitcoins in my account.

There's the tip jar.

But there's more to this service than a Tip button. I haven't tried it yet, but you can also tip people via any social network just by mentioning both ChangeTip and the recipient in a post. For example, if you and I have both connected our Twitter accounts to ChangeTip, all you'd have to do in order to tip me $1 would be to tweet "hey @THECaseyLeonard, here's $1 @changetip" and I'd get a dollar's worth of bits from your ChangeTip account sent to mine. There's similar behavior on Facebook.

This seems like an interesting concept and, in my opinion, a good alternative to advertising as a way to make a few dollars as a content creator. My comic runs no ads, but my blog here does have some banners on it. In a good month, I'll earn maybe $2 from them. If people like just one or two comics per month enough to tip, I could easily replace that revenue.

The other benefit is that the money is more directly related to how people feel about the content I create. If I make $0.15 from an ad click here, it's not because someone really liked this blog post - it's because Google showed them an ad for something in which they were interested. If someone sends me a $0.15 tip, it's because they liked my blog post. The tip makes me feel better about the work I'm doing.

While I think the Tip button is a positive addition to my site, it is not perfect. It fits in nicely with the other social media buttons, but when you click on it the controls appear in a floating IFrame whereas the others typically use a separate pop-up window. I wouldn't mind that so much, except that it always expands down and right, which throws off the rest of the page - especially on mobile. Perhaps a mobile-optimized view would be in order.

I do feel a bit like the guy playing the saxophone in the subway station waiting for passers-by to toss change in a hat, but he and I have a lot in common I suppose. I guess I'll see what happens. It is a relatively new service, and my comic doesn't have a large readership, so I don't expect tips to start pouring in right away. Hopefully, though, the concept catches on and this service or one like it can usher in a new era of how the Internet is funded.

Amphibian.com comic for 6 February 2015

Wednesday, February 4, 2015

I Promise

Not This Kind of Promise
I've been working with Node for a while now. I wrote the whole application for my web comic using it as a way to learn something new and make something real at the same time. I like to be practical like that. One thing that I've noticed, as I'm sure anyone who's worked with JavaScript for more than 20 minutes has, is the tendency for large numbers of nested callbacks to create the Pyramid of Doom.

Sounds like you need to carry a whip and wear a fedora if you're going in there.

No, it's really just code like this. It grows sideways faster than it grows down.

call1(function(data) {
    call2(data, function(moreData) {
        call3(moreData, function(evenMoreData) {
            call4(evenMoreData, function(finalData) {
                // finally got what I wanted
            });
        });
    });
});

There is an alternative to this approach, called Promises. I just recently started replacing some of my callback-style code with Promises to see how it works.

A Promise, according to the Promises/A+ spec, is a representation of the eventual result of an asynchronous operation. The primary way of interacting with a promise is through its then method - where callbacks are registered to receive the eventual result of the operation or the reason why it failed.

Yeah, I had no idea what that meant the first time I read it either. I learn by experimenting. So I looked at some examples and then tried to convert one of my callback-style functions to use a Promise instead. Here's what happened.

First, I had to get a Promises library. Promises aren't supported natively in Node, so I used the Q module. It implements the Promises spec.

Here's the first function that I modified to use Promises, before I modified it.

function listImages(cb) {

    db.query('SELECT filename, type FROM comic_img', function(err, rows) {
        if (err) {
            cb(err);
        } else {
            var data = [];
            if (rows.length > 0) {
                for (var i = 0; i < rows.length; i++) {
                    data.push({
                        filename: rows[i].filename,
                        type: rows[i].type
                    });
                }
            }
            cb(null, data);
        }
    });

}

It's just one of many functions in my comic's data access module, the one that returns the list of all available images that can be used in a comic. As you can see, it implemented a callback model, where a function to be called (the callback) was supplied as the argument. The database is queried, and the resulting data is returned to the callback. As is typical for callbacks, the first argument sent to the callback function is any error that may have occurred or null if everything worked. The second argument will be the actual data which the caller was requesting.

Calling the listImages function in my web application originally looked something like this:

listImages(function (err, data) {
    if (err) {
        next(err);
    } else if (data) {
        res.setHeader('Content-Type', 'application/json');
        res.send(data);
    }
});

Now let's look at the function after I converted it to use Promises instead of callbacks.

var q = require('q');
function listImages() {

    var deferred = q.defer();

    db.query('SELECT filename, type FROM comic_img', function(err, rows) {

        if (err) {

            deferred.reject({
                message: 'database query failed',
                error: err
            });

        } else {

            var data = [];
            if (rows.length > 0) {
                for (var i = 0; i < rows.length; i++) {
                    data.push({
                        filename: rows[i].filename,
                        type: rows[i].type
                    });
                }
            }

            deferred.resolve(data);

        }
    });

    return deferred.promise;

}

The differences are not that extreme. At the top, I have to require the Q module of course. Internally the function still makes the call to the database the same way, but it doesn't require a callback function to be passed in as an argument. Instead, it creates a deferred object right away and returns the deferred.promise at the end. That's different - the original version didn't return anything. If the database query fails, I call deferred.reject and pass in an object describing the error. If everything works, I call deferred.resolve and pass in the data. But the database call happens asynchronously - whoever called this function got the promise returned to them and has to use that for handling the eventuality of data or failure. So here's how the caller changed:

listImages().then(function(data) {
    res.setHeader('Content-Type', 'application/json');
    res.send(data);
}, function(error) {
    next(error.error);
});

Calling listImages now returns a Promise, and the primary way of interacting with Promises is through the then method. The first parameter to the then function is a function to be called when the Promise is resolved, taking the data that was produced (what I passed to the deferred.resolve function above). The second parameter is a function to be called if the Promise is rejected and taking an object representing the reason for rejection (the error description object I passed to deferred.reject).
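
One big payoff not shown above is chaining: each call to then returns a new promise, so a sequence of asynchronous steps stays flat instead of nesting. Here's a sketch with placeholder functions (written against the standard Promise API, which is now native in browsers and newer Nodes; any Promises/A+ library, Q included, chains the same way):

```javascript
// Stand-ins for the call1..call4 pyramid from earlier, each returning
// a promise for the next step's input instead of taking a callback.
function call1()     { return Promise.resolve(1); }
function call2(data) { return Promise.resolve(data + 1); }
function call3(data) { return Promise.resolve(data * 2); }
function call4(data) { return Promise.resolve(data + 10); }

// The Pyramid of Doom becomes a flat chain; a single rejection handler
// at the end catches a failure from any step.
var finalPromise = call1()
    .then(call2)
    .then(call3)
    .then(call4)
    .then(function (finalData) {
        return finalData; // finally got what I wanted, no pyramid required
    }, function (error) {
        console.error(error);
    });
```

The code now grows down instead of sideways, which was the whole point.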

So there is my simple example of using Promises instead of the callback pattern to deal with asynchronous operations. There is a lot more that the Promises specification offers and much more you can do with the Q library. I'm going to keep working with it and see what happens. I might post more later. In the meantime, you can read this comic about frogs in the Silicon (Dioxide) Valley.

Amphibian.com comic for 4 February 2015

Monday, February 2, 2015

Can We Get To The Bottom Of This?

If you followed the link here from today's comic, you may have just experienced an infinite scroll. If not, I'm sure you've experienced it on Twitter, Facebook, or Pinterest. No matter how far down on the page you go, more stuff keeps getting added to the bottom. You can never reach the end!

Obviously, the comic today invokes the stuck-in-a-loop element from the Bill Murray film Groundhog Day as well as the tendency for web pages to scroll forever these days. The "invention" of the infinite scroll web page was a response to the rise in mobile web access. It is more natural to keep moving your thumb to access more information than it is to touch "previous" or "next" buttons.

If you want to add the infinite scroll element to your own web application, be careful. As a design paradigm, it doesn't work for all situations. Consider what type of information is to be displayed. Will all data be of equal relevance, or will the most important be near the top? Is the data on a timeline? Facebook and Twitter show you the newest items at the top (more or less), and the more you scroll, the farther back in time you go. They make some exceptions to that in the case of conversations, because most people want to read the beginning before the end. Also, consider how navigation is affected. In my comic, the links to this blog, the social media buttons, and the previous/next navigation links are at the bottom of the page. I had to make special accommodations for today's comic in the form of a fixed-position panel at the bottom that can be shown and hidden. You could have a similar issue if your navigation is at the top and your user has scrolled 4800 feet down the page. One final issue: if your dynamically-added content waaaaay down on the page is a link to something else, the user might be disappointed when they use the back button and don't really go back to the same place they left.

If you're okay with all those issues and want to make your own page scroll forever, it is very easy to do with jQuery. Here is an example of what I did for my comic.

$(function() {

    var addPoint = 300;

    $(window).scroll(function(){

        if ($(this).scrollTop() > addPoint) {

            // get some new data
            // add it to your page

            addPoint += 800;

        } 

    });
 
});

My code sets up a function that will listen for scroll events on the window. The value returned from scrollTop() will be the number of pixels hidden from view due to the page scrolling down. Initially, I want to add more data when the user scrolls past 300 pixels. But as I add more data, the point at which I want to add more data will increase as well. Typically, the data you want to add will come from your server via an asynchronous call - so make sure you start the process before the user gets all the way to the bottom to ensure you get it added in time.
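Because the fetch is asynchronous and scroll events fire rapidly, it's also worth guarding against firing several requests while one is still in flight. Here's a sketch of that bookkeeping pulled out into a plain object so it's easy to follow - the names (makeLoader, shouldLoad, and so on) are mine for illustration, not from the comic's actual code:

```javascript
// A sketch of the scroll-threshold bookkeeping, separated from jQuery.
function makeLoader(firstPoint, step) {
    var addPoint = firstPoint;
    var loading = false;
    return {
        // true when we've scrolled past the threshold and nothing is in flight
        shouldLoad: function(scrollTop) {
            return !loading && scrollTop > addPoint;
        },
        // call just before starting the AJAX request
        begin: function() { loading = true; },
        // call once the new content has been added to the page
        done: function() {
            loading = false;
            addPoint += step;
        }
    };
}

var loader = makeLoader(300, 800);
console.log(loader.shouldLoad(200)); // false - haven't scrolled far enough
console.log(loader.shouldLoad(350)); // true  - past the 300px point
loader.begin();
console.log(loader.shouldLoad(400)); // false - a request is already in flight
loader.done();
console.log(loader.shouldLoad(400)); // false - next point is now 1100px
```

Inside the $(window).scroll handler, you would check loader.shouldLoad($(this).scrollTop()), call begin() before kicking off the request, and call done() in its success callback.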

If you want to address the navigation issue as well, I like to sometimes use the Affix tool that comes with Twitter Bootstrap. I even make use of it in my comic editor - though it doesn't scroll forever, there are some things I like to keep visible on the page at all times, no matter how far down I have to go. It's extremely simple to set up on your Bootstrap-enabled page.

<div data-spy="affix" data-offset-top="50">
    <div>
        <p>You can put navigation or whatever here.</p>
    </div>
</div>

But you also need some of your own styles to make it work. Here is some CSS to go along with the HTML code above.

.affix {
    top: 8px;
    left: 8px;
}

.affix-top {
    position: fixed;
    top: 110px;
    left: 8px;
}

As soon as the page loads, the Bootstrap JavaScript adds the affix-top class to your element. What that means is totally up to you - in my example I specify the affix-top class to mean the element has a fixed position 110 pixels from the top of the page and 8 pixels from the left. When the user scrolls down farther than the value given in the data-offset-top attribute, 50 pixels in my example, the affix-top class is removed and replaced with the affix class. Again, what this means is up to you. Bootstrap specifies "position: fixed" but nothing more. In my CSS, I specify the position as 8 pixels from the top and 8 from the left. That will keep it in view no matter how far the page is scrolled. When you scroll back up to the top, Bootstrap reverses the process and puts the affix-top class back on in place of the affix class.

If you're in the mood for a scroll that's a little more finite, there's always One Mile Scroll. Which is, you know, one mile. Slightly shorter than infinity.

Amphibian.com comic for 2 February 2015