Increasing line height in your terminal and IDE

Just a small tip today. During my “What Developers Need To Know About Visual Design” conference presentations I discuss the importance of line height and how increasing it can improve readability on websites and applications. The same technique applies to your development environment and IDE. By increasing the vertical spacing you make the text much easier to read, so it requires less effort and you’re slightly less drained at the end of the day.

Within iTerm you can set the line height in the text preferences.

Enabling infinite scrollback in iTerm

 

The default profile within iTerm limits how many lines of output it caches for scrolling back. When debugging a large amount of output or a long-running terminal session, this can become frustrating.

To enable unlimited scrollback, go into the preferences; on the Terminal tab you’ll find the “Unlimited scrollback” option. Tick it and in future you’ll be able to see everything, not just the last 10,000 lines.

 

iTerm Scrollback lines

Tunnelling to a Docker Container using ngrok

Ngrok is pitched at the use case of “I want to securely expose a local web server to the internet and capture all traffic for detailed inspection and replay.”

While playing with RStudio, an R IDE available inside the browser, what I actually wanted was for ngrok to “securely expose a local web server running inside a container to the internet”.

Turns out it is very easy. Let’s assume we have RStudio running via boot2docker (b2d) on port 8787.
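For context, one way to get RStudio listening on that port would be something like this (a sketch, assuming the rocker/rstudio image; any image exposing RStudio Server on 8787 works the same way):

$ docker run -d -p 8787:8787 rocker/rstudio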

To proxy to a port on our local machine we’d use:

$ ngrok 8787

Sadly this will fail, as our b2d container is not running on 127.0.0.1. The way around it is to specify the boot2docker hostname/IP address:

$ ngrok b2d:8787

You’ll get the output:

Forwarding http://47df0f.ngrok.com -> b2d:8787

All HTTP requests to the domain will now be forwarded to your container. Very nice!

For those wondering why I have a b2d hostname, I added it to my hosts file because typing is sometimes the bottleneck.

$ cat /private/etc/hosts
192.168.59.103 b2d

Cache Invalidation with Cloudflare, WordPress, Varnish and HTTP PURGE


While having a cache can help WordPress scale, you run into one of the hardest problems in computer science: cache invalidation. When a new post is published, the homepage cache needs to be broken in order to refresh.

When using Varnish there is a really nice WordPress plugin called Varnish HTTP Purge. Under the covers, when a new post or comment is published it issues an HTTP PURGE request to break the cache.

Unfortunately, if you have Cloudflare in front of your domain then it will attempt to process the PURGE request itself and fail with a 403. After all, you don’t want the entire world being able to break your cache.

$ curl -XPURGE http://blog.benhall.me.uk
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>cloudflare-nginx</center>
</body>
</html>

My solution was to add an /etc/hosts entry for the domain on the machine making the request, pointing it at the local IP address. When an HTTP request is issued to the domain from my web server it skips Cloudflare and goes straight to the Varnish instance, allowing the cache to be broken and solving the problem.
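The entry looks something like this (a sketch: the address is illustrative, use whatever IP your Varnish instance listens on):

$ cat /etc/hosts
127.0.0.1 blog.benhall.me.uk
$ curl -XPURGE http://blog.benhall.me.uk/
# the PURGE now resolves locally and is answered by Varnish rather than cloudflare-nginx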

Making users feel special with an invite (or the Fabric invite email)

As a user, when signing up to a preview of a product you’ll likely receive a very generic thank-you message, a MailChimp confirmation or nothing at all. When a company does something different it stands out and users generally notice.

To use Crashlytics I needed to join the Fabric developer programme. Fabric is a cross-platform mobile development suite from Twitter that includes a number of modules and tools to help with the application development lifecycle; Crashlytics is the module designed around crash reporting and alerts.

After joining the programme I received the standard email saying I’m on the list. Nothing to see here.

Twitter Fabric Invite

After 9 minutes a second email arrived. Enough time had passed that it could be personal and not automated; unlikely, but I still like to believe.

Twitter Fabric Invite Email

A couple of items instantly stood out from the email.

1) Firstly, the subject: “Fabric access (need reply)”. Ten minutes ago I was told I was on the list; now I receive an email about my access that needs a reply. It sparked my interest enough to open it.

2) The opening paragraph states the founder “pulled aside one of the devs to create a batch of one just for you.” – Instantly giving the user special treatment and making them feel important. I don’t believe this happened but there is still a positive feeling attached to the statement and the company as a whole. It’s a nice touch.

3) “Check your inbox shortly for the invite” – This keeps me engaged and the product at the front of my mind. It also starts to build the anticipation that I might be joining something special.

4) “Let me know once you receive the invite!” – A great way to engage with users and start the conversation. It doesn’t ask about first experiences or say “only get in touch if you need something”, both of which make the user think. It would be really interesting to see whether this sparks conversations and what questions come back attached to that initial email.

A few moments later an invite code arrived and I signed up instantly. Sadly, I didn’t let Wayne know, sorry Wayne.

Scaling WordPress with Varnish and Docker

In my previous post I discussed how my blog is hosted. While it’s a great configuration, it is running on a small instance and the WordPress cache plugins only offer limited value. Andrew Martin showed me his blitz.io stats and they put mine to shame. Adding Varnish, an HTTP accelerator designed for content-heavy dynamic web sites, to the stack was agreed.

My aim was to have a Varnish instance running in between the Nginx container that routes all incoming requests to the server and my WordPress container. With a carefully crafted Varnish configuration file, I use the following to bring up the container:

docker run -d --name blog_benhall_varnish-2 \
   --link blog_benhall-2:wordpress \
   -e VIRTUAL_HOST=blog.benhall.me.uk \
   -e VARNISH_BACKEND_PORT=80 \
   -e VARNISH_BACKEND_HOST=wordpress \
   benhall/docker-varnish

The VIRTUAL_HOST environment variable is used by Nginx Proxy. The Docker link allows Varnish and WordPress to communicate; my WordPress container is called blog_benhall-2. VARNISH_BACKEND_PORT defines the port WordPress runs on inside its container, while VARNISH_BACKEND_HOST defines the internal hostname, which we set when creating the Docker link between the containers.

When a request comes into the Varnish container it is either returned instantly or proxied to a different container and cached on the way back out.
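A quick way to sanity-check that responses are actually being cached (a sketch, relying on Varnish’s default Age and X-Varnish response headers, which anything sitting in front of Varnish may alter) is to request the same page twice and compare the headers:

$ curl -sI http://blog.benhall.me.uk/ | grep -iE '^(age|x-varnish)'
# on a cache hit the Age value climbs and X-Varnish reports two transaction IDs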

Thanks to Nginx Proxy I didn’t have to change any other configuration; it simply reconfigured itself as new containers were introduced. The setup really is a thing of beauty that can now scale. I can use the same docker-varnish image to cache other containers in the future.

The Dockerfile and configuration can be found on Github.

The Docker image has been uploaded to my hub.

Making Cron jobs easier to configure with Special Words

Cron jobs are a very useful tool for scheduling commands; however, I find the crontab (cron table) syntax nearly impossible to remember unless I’m working with it daily.

Most of my Cron jobs are fairly standard, for example backing up a particular directory every hour. While configuring a new job I looked around to remind myself how to execute a command at a particular time every day. Most of the online editors I tried are more complex than the syntax itself. Thankfully I came across an interesting post from 2007 that mentioned special words. It turns out that you can use a set of keywords as well as numbers when defining a job:

@reboot Run once at startup
@yearly Run once a year
@monthly Run once a month
@weekly Run once a week
@daily Run once a day
@hourly Run once an hour

To run a command daily I can simply use:

@daily <some command>
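Similarly, the hourly directory backup mentioned earlier could be written as a crontab entry like this (a hypothetical example; the paths are illustrative):

@hourly tar -czf /backups/www.tar.gz /var/www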

But when is @daily? According to crontab(5) it is simply shorthand for “0 0 * * *”, i.e. midnight. The /etc/cron.daily scripts are scheduled separately; grepping /etc/crontab for the run-parts entries shows those fire at 6.25am. A strange time, but it works for me!

$ grep run-parts /etc/crontab
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

Introducing Ocelite – A schemaless embeddable database

Given the frustration aired in my previous post I decided to do something about it. The tl;dr version of the post is that I just want to be able to store data for future use without the overhead of additional services, servers, schemas, versioning etc.

My solution is Ocelite. Ocelite is built on top of sqlite3 and provides an easy way to store and retrieve data for Node.js applications, but the pattern works across languages.

Here is a snippet for saving and retrieving data:

var Ocelite = require('ocelite');
var db = new Ocelite();
// open data.db and register the 'user' category; cb is your callback, e.g. function(err) { ... }
db.init('data.db', ['user'], cb);
// store a user, indexing on the twitter property
db.save('user', {name: 'Barbara Fusinska', twitter: 'basiafusinska'}, ['twitter'], cb);
// look up a user via an indexed value
db.get('user', 'twitter', 'basiafusinska', function(err, arr) { console.log(arr); cb(); });
// return every stored user
db.all('user', function(err, arr) { console.log(arr); cb(); });

To start, we initialise the database and define categories for the data we want to store, in this case users, using `db.init`. If in future we want to store additional categories we can extend the array; when the application is reloaded the new category is automatically taken into account without requiring any migration scripts.

To save data via `db.save` we state the category along with the block of data being stored. The third parameter is optional and allows us to define which properties of the data block we want to index.

If we index our objects we can use the `db.get` function to return them later via the related lookup value, for example a Twitter handle. If we want to return all our users then we can use the `db.all` function.

You can install it as an NPM package.

$ npm install ocelite

That’s it. Nothing else. No SQL insert statements, no migration scripts, just saving data. The source code is available at http://github.com/OcelotUproar/ocelite

One final thing: why call it Ocelite? My company is called Ocelot Uproar, I like ocelots, it’s a nod towards SQLite, and naming is hard.

The Yak has been shaved.

 

It’s 2015, please just let me store data

Looking back at 2014 I worked with CouchDB, MongoDB, LevelDB, Cassandra, ElasticSearch, Redis, Neo4j, PostgreSQL and MySQL to manage data. Faced with a new prototype I reached the point where I needed to save data. I don’t need it to scale yet, I don’t need it to have map/reduce and storage for billions of records, I don’t even need it to be quick. I just want to store data and in future be able to easily have the data returned.

Turns out my choices are limited, to the point that flat files looked like the best option. Before I went down that path I tried one more approach: SQLite3. This post investigates how sane SQLite3 would be, given it’s stable and embeddable.

Firstly we need to create the database schema; the solution is already becoming time-consuming and boring. The script I created, which runs when the application loads, is as follows:

var path = require("path");
var fs = require("fs");
var file = path.join(__dirname, "data.db");
var sqlite3 = require("sqlite3").verbose();

function create(cb) {
  var db = new sqlite3.Database(file);

  console.log("Creating db...");
  db.serialize(function() {
    db.run("CREATE TABLE user (id integer primary key, fb_id TEXT, name TEXT, email TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)");
    console.log("Created db");

    cb();
  });
};

function init(cb) {
  fs.exists(file, function(exist) {
    if(exist) {
      return cb();
    } else {
      create(cb);
    }
  });
};

module.exports = init;

If the schema changes then we’ll need to write an additional script, a problem we can worry about for another day.
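In practice such a script would probably be little more than an ALTER TABLE against the same database file, for example (a sketch; the twitter column is purely illustrative):

$ sqlite3 data.db "ALTER TABLE user ADD COLUMN twitter TEXT;"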

Once we’ve created the DB, inserting data becomes straightforward, apart from the fact that we might not know the shape of the data in advance, meaning migration scripts are likely to be needed sooner rather than later.

db.run("INSERT INTO user (fb_id, name, email) VALUES (?,?,?)", [fb_id, name, email], function(err) {
  res.status = 201;
  res.end();
});

One nice added bonus is the sqlite3 command line tool.

$ sqlite3 db/data.db
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select * from user;

While SQLite3 works nicely for storing data, having to manage a schema is an overhead and an additional problem I don’t want to deal with. It’s 2015, why can’t I just store some data?


Boot2Docker runs out of disk space

After a couple of months of using Boot2Docker you can quickly produce a large number of images and containers.

$ docker images | wc -l
76

$ docker ps -a | wc -l
194

Each of these will be taking up valuable space on your drive. By default, boot2docker is only allocated an 18.2G disk, so eventually when you attempt to build or pull new images it will fail due to running out of space.

The df command can be used after ssh’ing into the boot2docker VM to identify how much space you have left. Boot2docker uses /mnt/sda1 for storing images and containers.

$ boot2docker ssh
$ df -h
Filesystem Size Used Available Use% Mounted on
rootfs 1.8G 203.5M 1.6G 11% /
tmpfs 1.8G 203.5M 1.6G 11% /
tmpfs 1004.2M 0 1004.2M 0% /dev/shm
/dev/sda1 18.2G 18.2G 0K 0% /mnt/sda1
cgroup 1004.2M 0 1004.2M 0% /sys/fs/cgroup
/dev/sda1 18.2G 18.2G 0K 0% /mnt/sda1/var/lib/docker/aufs

If you’ve run out of space, one fix is to increase the size of the volume as described at https://docs.docker.com/articles/b2d_volume_resize/

The other, and potentially more sensible, approach is to perform some housekeeping.

Firstly, to remove any exited containers you can use the command below. Note this will remove any data inside the container unless it has been mounted as a separate volume.
$ docker ps -a -q | xargs -n 1 -I {} docker rm {}

The most space can be recovered by removing images, especially untagged ones. An image becomes untagged when it was only referred to via the latest tag and a newer image is built with the same name: the new build takes the tag, and unless the previous image has another name it is left as <none>. Thanks to Mike Hadlow for the shell script to clean them up.
$ docker rmi $( docker images | grep '<none>' | tr -s ' ' | cut -d ' ' -f 3)

Another problem, as I’ve discussed in a previous blog post, is that you might have downloaded more image tags than you expected via fig or docker pull. For example I accidentally had 19 versions of redis on my local machine when I only needed one.

$ docker images | grep redis | wc -l
19

These are easily cleaned up by replacing <none> with the image names you want to remove.
$ docker rmi $( docker images | grep 'redis' | tr -s ' ' | cut -d ' ' -f 3)

Alternatively, if this is just too much hard work then simply burn it all and start again.
$ boot2docker destroy
$ boot2docker init