Friendly Neighborhood Reminder To Optimize Your Images

My daughter is a big fan of Daniel Tiger’s Neighborhood, and today I was curious who the voice actor was. I found the answer on Angela’s Clues (it’s an 11-year-old named Jake Beale). I also noticed some pages on the site loading really slowly.

I clicked my Google PageSpeed bookmarklet on the projects page and it got the lowest score I’ve ever seen.

12 out of 100 on Google PageSpeed

The #1 suggestion was to optimize images, claiming they could be reduced by a stunning 3.8 MB. Sure enough, when I ran one of the files through PNGGauntlet, it went from 765 KB down to a mere 106 KB, about 14% of its former size.
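If you’re not on Windows (PNGGauntlet is a Windows app built around optimizers like PNGOUT and OptiPNG), a command-line tool can do a similar lossless pass. Here’s a minimal sketch using OptiPNG, assuming it’s installed, with image.png standing in for your file:

# Lossless recompression at the most thorough preset
optipng -o7 image.png

# Or optimize every PNG under the current directory
find . -name '*.png' -exec optipng -o7 {} \;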

And there’s even better news if you use WordPress. Instead of optimizing images manually, install the EWWW Image Optimizer plugin: it will not only optimize images as you upload them, but can also optimize all of the images already on your site.

Speedy delivery isn’t just for the mail. Make Mr. McFeely proud.





Go vs Node vs PHP vs HHVM and WordPress Benchmarks

I have been impressed with the performance I’m seeing with Vultr VPSes, so I decided to do an experiment to see what the maximum performance could be.

I created simple Hello World programs in Go, Node.js, and PHP, then tested them with ApacheBench 2.3 (the PHP script was also run under HHVM).

Here are the three programs I used.

Go 1.4

package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "Hello world from Go")
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":4000", nil)
}

Node 0.10.33

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World from node');
}).listen(3000, '127.0.0.1');

PHP 5.6.6 with Opcache enabled

<?php
echo "Hello world from PHP";

These benchmarks were run on a Vultr server with 768 MB of RAM and a single 3.4 GHz CPU, running nginx 1.6.2. To perform the benchmarks, I ran the following ab command three times to warm up, then ran it three more times and averaged the second group of three.

ab -q -n 5000 -c 25 localhost/
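For reference, here’s roughly what that warm-up-and-average procedure looks like as a script. This is only a sketch: the URL variable and the awk parsing are my own additions, and you’d point it at whichever port the server under test listens on (for example :4000 for the Go program above).

URL=http://localhost:4000/

# Three warm-up runs, results discarded
for i in 1 2 3; do
    ab -q -n 5000 -c 25 "$URL" > /dev/null
done

# Three measured runs, averaging the requests-per-second figure
for i in 1 2 3; do
    ab -q -n 5000 -c 25 "$URL" | awk '/Requests per second/ {print $4}'
done | awk '{ total += $1 } END { print "average:", total / NR }'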

WordPress had WP Super Cache enabled. Without it WordPress was getting around 30 requests/second.

Without further ado, here are the results. Higher is better.

Benchmark Results

Here’s the data in tabular form:

Test Type            Requests per second
Nginx                            17,791
Go                                4,542
Node.js                           3,614
PHP                               1,773
HHVM                              3,090
WordPress (PHP)                     854
WordPress (HHVM)                  1,259

Go is the clear winner, but I was surprised to see how close HHVM came to Node. I was also impressed that under HHVM, WordPress approached the throughput of the plain PHP script. Of course, that was with the caching plugin in place.

I planned to include the results of nginx serving a static file in the chart, but it made the other results hard to distinguish at a whopping 17,791 requests/second.

Lastly, I was concerned to find that sometimes the HHVM results went into the toilet. Most of the other benchmarks were fairly stable, but HHVM would average over 3,000 on the first three runs, then drop off on the next three. In one case it hit around 700, so something was clearly wrong, but I’m not sure what. I had already fixed the nf_conntrack_max issue, but it could be something else along the same lines.

My takeaway is it’s a great time to be a web developer. Getting WordPress to hit over a thousand requests a second on a $5/month server is impressive. And it’s only getting faster!





Wildcard Site for Laravel Homestead

Laravel is a fantastic PHP framework. Homestead makes it even easier to do Laravel development. It’s basically a prebuilt Vagrant VM so you can build Laravel projects without having to worry about setting up the server environment first.

If you’re not familiar with Vagrant, it’s another extremely useful tool to create easily reproducible development environments. You create a Vagrantfile, which serves as a recipe for a virtual machine. You can give that single text file to another developer or copy it to your laptop if you’re traveling, then run vagrant up and you will get an identical environment.

Now back to Homestead.

A minor nuisance with Homestead is that you need to add a new site configuration for each project you’re developing. You can do this in one of two ways: add the site to Homestead.yaml, or run the serve script inside a running Homestead VM. In either case, you also have to add an entry for the new site in your hosts file.

Before I started using Homestead, I used a Vagrant box that I had configured to use regular expressions in nginx. This let me use a single nginx configuration for all my projects. It was really convenient, but I assumed it wouldn’t work with Homestead. I was wrong.

Let’s say you use the format of project_name.app for your Laravel projects. For example, if your project was named waldo, you would browse to waldo.app (after having updated your hosts file) to see the site. Normally, you’d need to have something like this in your Homestead.yaml:

folders:
    - map: (path to your Laravel projects)
      to: /home/vagrant/Code

sites:
    - map: waldo.app
      to: /home/vagrant/Code/waldo/public

    - map: franklin.app
      to: /home/vagrant/Code/franklin/public

    - map: henry.app
      to: /home/vagrant/Code/henry/public

    - map: phil.app
      to: /home/vagrant/Code/phil/public

And so on, for each project. Instead, you can use this:

folders:
    - map: (path to your Laravel projects)
      to: /home/vagrant/Code

sites:
    - map: '~^(?<project>.+)\.app$'
      to: /home/vagrant/Code/\$project/public

And it will work for any number of sites as long as you use the same project_name.app format.
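Under the hood, Homestead’s serve script turns that mapping into an ordinary nginx server block that uses a regex server_name with a named capture. Here’s a trimmed-down sketch of the idea (not the exact file Homestead generates; the PHP-FPM socket path in particular is an assumption):

server {
    listen 80;
    # The named capture from the regex is available inside nginx as $project
    server_name ~^(?<project>.+)\.app$;
    root /home/vagrant/Code/$project/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # Assumed socket path; use whatever PHP-FPM listens on in your box
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}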

The file that’s created in /etc/nginx/sites-available won’t look pretty, but it removes a step from your development process. And if you’re anything like me, efficiency is bliss.





Windows 7 Showing the Wrong Thumbnail

I have a few avatar pictures that I use for web sites I join. We had some new family pictures taken for Christmas, and I cropped and resized some of them to make new avatar pictures. But whenever I looked at them, Windows Explorer kept showing the entire, uncropped image as the thumbnail. Windows caches thumbnail images for performance reasons, so it doesn’t have to regenerate them every time you visit a folder of pictures. I assumed the cache was out of date.

One deceptively simple solution was to open Windows Explorer to the folder containing the images and change the view. You can do this by right-clicking inside the folder and selecting View, then Large or Medium icons (whichever one isn’t already selected). That regenerated the thumbnail, but when I switched back to the previous view, the outdated thumbnail returned.

So I took a more severe approach. According to this Microsoft Answers post, the thumbnail cache is stored in %LocalAppData%\Microsoft\Windows\Explorer. To clear it out, I deleted all of the thumbcache_*.db files.

But this didn’t solve the problem either. When the thumbnail cache was regenerated, it still showed the old, uncropped image! If I had kept reading the thread I linked to above, I would have realized this sooner. JPEG files can store a thumbnail in their EXIF data, and that embedded thumbnail wasn’t updated when I cropped and resized the image. I confirmed this using MiTeC’s Photo View application.

Don’t forget about this embedded thumbnail if you ever post a cropped version of a photo that contains anything sensitive or private. In 2003, Cat Schwartz made this mistake, posting a headshot of herself that happened to be cropped from a photo of her sans clothes.

But back to the problem at hand. The final solution was to use Stripper (great name) to remove the EXIF data. Just so you’re aware, when you drag an image onto the Stripper window, it will overwrite the original image with the stripped version without any confirmation. If you want to keep the original, make a copy first.
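If you’d rather do this from the command line, ExifTool can handle it too. A quick sketch (photo.jpg is a placeholder; by default exiftool saves the original as photo.jpg_original when it rewrites a file):

# See the embedded thumbnail Explorer is picking up
exiftool -b -ThumbnailImage photo.jpg > embedded-thumbnail.jpg

# Remove just the embedded thumbnail
exiftool -ThumbnailImage= photo.jpg

# Or strip all of the EXIF data, similar to what Stripper does
exiftool -all= photo.jpg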

And the way to stop this from happening in the future is to use an image editor that updates or removes the EXIF thumbnail when you crop the image. I emailed FastStone Image Viewer support to see if there’s a way to do this that I’m not seeing, and if not, to ask them to fix it.





nf_conntrack: table full, dropping packet

I was benchmarking nginx on a Vultr server and the test ran great at over 17K requests per second, but after a little over 20K requests, nginx would just halt. Memory and CPU usage were nowhere near their limits, and eventually ApacheBench timed out.

This error occurred over and over again in /var/log/kern.log as soon as the slowdown hit:

nf_conntrack: table full, dropping packet

Thanks to this Security StackExchange post, I ran sysctl net.netfilter.nf_conntrack_max and found that nf_conntrack_max was set to 23788. I checked my Linode server and it was 65536. To get the Vultr server to use the higher limit, I ran this command:

sysctl -w net.netfilter.nf_conntrack_max=65536

Now the benchmarks run as smooth as butter.
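One thing to keep in mind: sysctl -w only lasts until the next reboot. If you want the higher limit to stick, you can also persist it, for example:

# Persist the setting across reboots, then reload
echo 'net.netfilter.nf_conntrack_max = 65536' >> /etc/sysctl.conf
sysctl -p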

I hope this saves you some time if you happen to run into the same situation.




