git+cron+[text editor] should be your note taking solution

I am a big fan of:

  • git
  • simple note taking for keeping track of details that don't fit into my short-term memory
  • ultra-simple TODO lists

After several years of writing custom apps to do this (too much work), trying Wunderlist (whose free version doesn't have good, programmer-quality sub-tasks, in my opinion), and working with paper notes (it's more difficult to re-order notes on paper than in a text editor), I've found a solution that seems to be working great for me.

It provides the benefits that any good note taking solution needs:

  • Keep yourself sane and focused on the right priorities
  • Don't forget little details
  • Easy to re-organize and share
  • Absolute minimal (or ultra-reliable) technology
  • If technology is involved, then those notes need to be reliably backed up without relying on some 3rd party cloud service that could be gone tomorrow or have an outage at a critical time when I really need my notes

Using technology for notes was actually a hard decision

Let me say right up front that I rarely draw diagrams or use images in my notes, and when I do they're best stored on paper or a whiteboard. I'm a programmer, and using something like EverNote (powerful and ubiquitous, and I gave it a real try several times over 2-3 years) for the 99% of my notes that are just plain text, just so I could also have image support, wasn't worth it. Jumping into EverNote was more of a pain than jumping to my ever-present friend, the text editor. What I really wanted was simplicity, no required reliance on 3rd parties, and a timestamp on notes if I really wanted to know when they were written (this is seldom important, but kind of nice).

On the other hand, I also wanted a solution that would capture all types of my notes, so that's why I tried (and re-tried) EverNote so many times.

I was also initially dubious of using technology to solve my problem, since it's just not as simple as paper notes. There is a lot to be said for a simple notepad and a pen or pencil. In fact, for the last several years I've been using this artist sketchbook paper to keep all my temporary notes; it has no lines, is easy to recycle, and features tear-out pages.

However, I largely moved away from using paper when I started managing a ton of little details on projects, and diving into particularly tricky algorithms over a period of days or weeks. I found myself thinking of things I needed to remember to review, write tests for, or ensure were still working before deploying my code; and I needed not to forget any of those little details days or weeks after the initial insight. So, while capturing these details on paper worked pretty well for a long time, when I wound up with 100 details that then needed to be organized, re-organized, and prioritized into a TODO list when I got back to my desk, paper just wasn't ideal.

The three simple, small pieces of my workflow

DISCLAIMER: If you're not already using git or another SCM like mercurial or something modern, then just stop here. This post is written by a programmer, for programmers.

To solve my note taking dilemma, and in conjunction with my fairly recent move to full-time Linux and vim usage, I started keeping all my notes in text files, automatically backed up by git to my own git server. Of course, if you don't want to use or host your own git server, then GitHub will work fine (except that they're a 3rd party you must rely on - it's a tradeoff).
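If you want the "own git server" part, a bare repository reachable over SSH is all it takes. Here's a hedged sketch of that setup; the host name and paths are placeholders, not anything from my actual setup:

# On a machine you can reach over SSH (host and paths are hypothetical):
ssh me@myserver 'git init --bare ~/repos/notes.git'

# On your workstation, create the local notes repo and point it there:
cd ~/notes
git init
git add .
git commit -m "Initial notes"
git remote add origin me@myserver:repos/notes.git
git push -u origin master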

1. a simple, combined format for notes and TODOs

I use the following format for nearly all notes:

Here's a free-form note that isn't a TODO. It might,
for example, describe the purpose of this list, or
describe some design philosophy or something.

Pending
=======
- Something to be done
/ Something complex, that is partially completed
  - With multiple sub-steps
  - Some of which still need to be done
  | And some of which are complete

Done
====
| Writing this blog post, for example, is done.
  You can continue multiline notes with simple
  indentation rules that are similar to how you
  already write code. Lines that are indented
  and begin with "-", "/", or "|" are sub-items,
  while lines that are just more text are a
  continuation of the previous line

The following simple conventions are followed:

  • Use "-" for something that needs to be done
  • Use "/" or "\" for something that's partially done (and/or has sub-steps that aren't yet done)
  • Use "|" for something that you believe to be done
  • Free-floating notes are just written out as needed and not prepended with any symbols.
  • Split "pending" and "done" tasks into separate sections. Technically these labels are redundant since the symbols already indicate the state of an item. However, I found that visually splitting up the list helped me to know how much progress I was making as the "Done" section got longer.

These five simple rules make for an as-detailed-as-you-need-it-to-be list that doesn't involve all the overhead of ticket or task tracking systems. It's sharable with your colleagues via your SCM tool if that's necessary (though I find it most useful for my own notes, which constitute tiny implementation steps that aren't necessary to share with anyone), and it's extremely quick to see what's done and what isn't at a glance.
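A nice side effect of putting the state markers at the start of each line is that ordinary shell tools can summarize a list for you. For instance (the filename below is a placeholder), counting top-level items by state:

# Quick tallies over a notes file (the filename is hypothetical):
grep -c '^- ' project-notes.txt          # top-level items still to do
grep -c '^| ' project-notes.txt          # top-level items believed done
grep -Ec '^(/|\\) ' project-notes.txt    # top-level items partially done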

A real-world example may look something like this:

Dependency resolution project for new package
manager.

Pending
=======
/ Dependency resolution code
  | Handle simple dependencies
  - Handle circular dependencies
- Finish test suite
  - Write unit tests
  - Write speed tests
- Try it on a simple package install
  - Did it work?
  - Were there errors?
  - Did anything unexpected happen?
- Try it on a complex package install
  - Did it work?
  - Were there errors?
  - Did anything unexpected happen?

Done
====
| Flesh out design with coding partner
| Estimate tasks and update ticket tracker
| Send updated estimate to client

2. a simple script to commit and push changes

Here's the script that I use to auto-commit and push my notes changes. It adds a simple commit message indicating that the script made the commit, as opposed to you making a commit by hand, which you can still do. This script should work unaltered on OS X and Linux:

#!/bin/bash
# NOTE: This auto-commit script must be executable

cd /path/to/my/notes/repo/

git add .
git commit -m "Automatic commit: $(date)"

# Nothing is committed/pushed if nothing has changed.
# You can detect this via the exit code from the
# 'git commit ...' command.
#
# Also, since this is a simple note taking repo you
# really don't need anything beyond the 'master' branch.
if [[ $? -eq 0 ]]
then
  git push origin master
fi
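
One small setup step before wiring this into cron: the script has to be executable (the path below is the same placeholder used in the script):

# Make the script executable so cron can run it directly
chmod +x /path/to/my/notes/repo/auto-commit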

3. cron job to periodically run the script

# Auto commit and push every 10 minutes
*/10 * * * * /path/to/my/notes/repo/auto-commit
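
If you haven't edited a crontab before, one common way to install the entry above is:

crontab -e   # opens your user's crontab in $EDITOR; paste in the line above
crontab -l   # lists the installed entries so you can confirm it saved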

Gotchas

There are only four small gotchas that I've run into after using this method for about 6 months, all of which are minor concerns for me.

  1. The git server you're pushing to must be accessible at least once in a while (once a day at worst?) for your commits to get backed up, but who isn't online most of the time these days?
  2. The commit script won't remove files that have been deleted or moved/renamed. I consider this perfectly fine: renaming and removing files in my notes repo is something I periodically do by hand, and I don't want the auto-commit script deleting files from the repo without my personal intervention. (The sketch after this list shows a variant that stages deletions automatically, if you'd prefer that.)
  3. If the cron job doesn't run you might not know that it failed. This isn't a huge problem for me since I'm in my notes and I remember to check the git log every few days, so I know it's committing and pushing, but if you're the forgetful type this might catch you in a bind. Also, cron tends to be hyper-reliable so long as you have a properly written cron job, so I don't worry about cron's reliability in this respect.
  4. If you're sharing your notes repo with colleagues (I don't, typically - it's for my personal use), then the commit script won't handle merges and such. I suppose you could add the following line to the script (before the commit line) to address that, however:

    git pull origin master

    Just don't branch on your notes repo and life will remain simple and beautiful.
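
For reference, here's a hedged variant of the auto-commit script that addresses gotchas 2 and 4. The repo path and the 'origin'/'master' names are the same placeholders as before; adjust to taste.

#!/bin/bash
# Variant auto-commit script: also stages deletions/renames, and pulls
# before pushing so a shared notes repo stays merged.

cd /path/to/my/notes/repo/ || exit 1

# 'git add -A' stages new, modified, AND deleted/renamed files
git add -A

if git commit -m "Automatic commit: $(date)"
then
  # Merge colleagues' changes (if any) before pushing
  git pull origin master
  git push origin master
fi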

Vim: Spacebar as leader key, CapsLock as Esc

I have many times seen people remap their CapsLock key to either Esc or Ctrl.

I prefer to use my leader key (if you don't know what a leader key is, check this out) as a substitute for Ctrl-based commands. Perhaps I have spent so much time on Mac that pressing the Ctrl key for things feels unnatural vs. the Command key; in the last year my favorite environment has become Linux, as close to full-time as possible, so I may be slowly losing this preference. Either way, that little stretch my pinky has to do to reach for the Ctrl key doesn't feel perfectly natural to me, and (again, just personal preference) remapping CapsLock to Ctrl didn't solve the issue of reaching for Esc. I've also found a shocking number of keyboards with Esc keys so small they're essentially useless at high speed when reliability is critical.

When I discovered the leader key, that was my solution.

Now, I have my CapsLock key remapped to Esc, and instead of using Ctrl all the time my spacebar is my leader key in normal mode. I love it because:

  • It requires no finger stretching
  • It can be easily hit by either hand at any time
  • It’s as effective as Ctrl because rolling <Leader><other key> is just as quick for me as pressing <C-other key>, and because I can do the space with either hand it doesn’t matter which other key I press. Most keys feel equally accessible so long as they’re near the home row.
  • By having CapsLock mapped to Esc I'm a quick flick of the finger from my entire leader key library. Also, the Leader key has the advantage, I think, of not needing to press and hold a modifier key while triggering a shortcut.
  • I've effectively lost the need for nearly all Ctrl-based commands, except on rare occasions.

This final bullet makes it much easier, for me at least, to use my entire keyboard as a leader-key-based set of custom shortcuts, which I think was the intention of the leader key in the first place, if I understand it correctly.
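
The CapsLock-to-Esc part isn't a vim setting at all; it happens at the OS level. On Linux under X11, one hedged way to do it (the xkb option name is standard, but your distro may offer a GUI setting instead) is:

# Remap CapsLock to Esc for the current X session (undone at logout)
setxkbmap -option caps:escape

# To make it stick, add the line above to a login script such as
# ~/.xprofile (a common convention, not a requirement)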

Below is my leader key-based .vimrc with my own shortcuts for extremely common tasks like moving and resizing buffer windows, opening split views, and saving buffers to disk. For me it replaces not only some Ctrl-based commands, but also some of the most common ":" commands.

let mapleader=" "
nnoremap <Leader>q :q<CR>
nnoremap <Leader>w :w<CR>
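" Mappings that end without <CR> (like :e, :vsplit, :split, :vimgrep below)
" leave the command line open so you can finish typing a filename or pattern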
nnoremap <Leader>e :e
nnoremap <Leader>v :vsplit
nnoremap <Leader>s :split
nnoremap <Leader>p :CtrlP<CR>
nnoremap <Leader>g :vimgrep
nnoremap <Leader>c :copen<CR>
nnoremap <Leader>C :cclose<CR>
nnoremap <Leader>8 :set tw=80<CR>
nnoremap <Leader>0 :set tw=0<CR>
nnoremap <Leader>n :set invnumber<CR>
nnoremap <Leader><TAB> <C-w><C-w>
nnoremap <Leader>h <C-w>h
nnoremap <Leader>j <C-w>j
nnoremap <Leader>k <C-w>k
nnoremap <Leader>l <C-w>l

nnoremap <Leader>H <C-w>H
nnoremap <Leader>J <C-w>J
nnoremap <Leader>K <C-w>K
nnoremap <Leader>L <C-w>L

nnoremap <Leader>, 2<C-w><
nnoremap <Leader>. 2<C-w>>
nnoremap <Leader>- 2<C-w>-
nnoremap <Leader>= 2<C-w>+

Questions to ask before jumping on the DevOps bandwagon

So, you've decided to go the DevOps route...hm...really?

Look, programmers aren't sysadmins just like sysadmins aren't programmers. Programmers need to stop acting like they know how to do everything.

Q: Does your company manage more than 10 servers?

No? Do you really need a DevOps tool? Did someone on the development team sell you on this idea because they think the tool is cool? If so, did the person who sold you on it ever work as a sysadmin professionally where their reputation was actually on the line if something went wrong? If not, why not?

Did someone on the development team claim, "We don't need sysadmins, because this tool does it for us?"

Maybe you don't need a full-time person, but you almost certainly need someone who's actually done the work, even if it's just to design and monitor the system periodically. If you don't have those resources then you need to assign someone on your team to start learning this stuff so they can become an expert. Substituting real expertise, gained from real experience, with a tool that claims sysadmins' jobs can simply be replaced may work for a while, but don't kid yourself into thinking that it's safe.

If you're a startup I realize that you may not have the experience, money, or other resources to have a full-time sysadmin, and I understand. But seriously, you need someone who understands systems doing the management of your systems. Handing this responsibility over to a community tool is irresponsible. You've essentially hired your next door neighbor, who has no specific expertise, to run your critical business infrastructure.

Q: Would anyone on your DevOps team know how to build these servers by hand if the DevOps tool didn't exist?

If not, then STOP RIGHT WHERE YOU ARE and get someone who has some real hands-on experience. At every level of your company, when something goes wrong, you need to have someone who knows what they're doing so they can resolve the problem quickly (or, better yet, so they will have designed a less faulty system to begin with).

Reading a troubleshooting FAQ and trying the things on the list is not troubleshooting. It's just executing a number of steps that someone else who has troubleshot the issue has compiled for you. What are you going to do if nothing on the list works, and you don't actually understand the system? My guess is you may try to restart the system, or reinstall it, or rebuild it, or take some other desperate action that equates to a last attempt to get the thing working.

Unless you're rebooting because you actually have a reasonable idea as to why nothing else will work, then do you really understand how the system works? Have you read through and understood the system logs and other information available to you in order to figure out what's at the root of the issue? If you can't get the troubleshooting information you need from the system as-configured, have you taken steps to increase logging/tracking of the system so you can at least have a chance of figuring out what's going on at some point in the future?

Q: When something goes wrong in your infrastructure, does your most experienced developer or DevOps person suggest that you simply restart the thing?

It is now time to run away screaming.

Sysadmins troubleshoot things, find root causes, change designs to push reliability ever higher, and make long-term recommendations for systems that will be more reliable, performant, and efficient for your business.

If the most experienced person on your team runs into a systems problem and thinks, "I'll just restart it - that seems to fix it every time," then it's unlikely that they know the system as intimately as they need to know it. Have they looked into the problem? Read the documentation? Posted a question online about the issue to find a solution from an expert? Examined configuration files to see if something is amiss? What, exactly, has been done to find out the actual cause of the problem?

I associate this "just restart it" mentality with inexperienced systems people. Server OSes regularly have uptimes of months or years unless there's a lot of change going on. They're just supposed to run and run and run all day.

If you're using Linux or some flavor of Unix then this is especially true. You're starting out with one of the most stable OS platforms on the planet, and if you can't keep your systems running consistently for 30 continuous days, or even a single week, or (gasp) 24 continuous hours, then this is a major problem.

If you investigate the problem and find that it's your custom, internal business software that's causing this reduction in reliability then you should really work with your development team to figure out what's going on and fix it. Software just doesn't need to be this unreliable.

Q: When systems aren't reliable, do people say, "They're not reliable because in order to make them reliable we would need [X] dollars/time/hardware to do it?"

If this is actually true (you're already successfully squeezing above-average business value per dollar spent on your infrastructure compared to similar organizations in your industry), then spend the time and capital it's going to take to push you forward.

If you're getting considerably less value per dollar spent on the infrastructure, then it may be time to take a serious look at your setup and look for chances to improve. Regular maintenance of your systems is critical, so make sure you're also doing the basic, regular, low cost things you can do to get the most out of what you've already got.

Claim: But our systems have never gone down using our DevOps tool!

That's...fantastically lucky.

If your systems have been reliable so far using a DevOps tool that you don't fully understand based on your answers to the questions above, then ask yourself these follow up questions:

Do you know why your systems are reliable? Can anyone explain, in detail that a non-technical person can understand, why the design of the system is reliable?

If not, then you don't really understand your infrastructure. It's time to go get answers to those questions.

If you are constantly throwing away your virtual machines and rebuilding them from scratch, do you know why you're taking that approach?

This approach has become more and more popular, but it can hide the need for someone to become an expert in maintenance: a core requirement of anyone running a business-grade system.

If you're rebuilding servers all the time is it because you tried running and patching servers by hand, and this approach broke down for some reason? Have you already attempted the tried-and-true approach of simple and regular maintenance, rather than scrapping your systems all the time?

What evidence (simple reliability metrics) do you have that your systems have been stable?

Do you track and publish system reliability metrics that everyone understands and can talk about, or are you trusting the dubious explanation, "Well, no one has reported a problem," as proof that your systems are actually reliable? If so, be wary. This is going to catch up to you at some point.

Where could I get a large, high quality, supervised learning data set for my machine learning algorithm?

[Image: SVM maximum-margin separating hyperplane]

I got to reading the other day about machine learning. I'm certainly no expert (in fact, more like a casual observer that wants to apply my coding skills to a fun problem), but I think I understand some of the basics about how its algorithms are supposed to work, and the kinds of outcomes that should be possible with supervised learning, given enough quality training data of the right sort.

As a programmer who dreams of building some automated system whereby I deploy some servers, set them to a task, and sit back and collect the earnings, I think, "What I need is to apply this machine learning stuff to some really common, menial task, even if it only pays pennies per hour, and deploy a system that has learned enough to do the menial task for me. Then, I can effectively have a collection of low-cost machines doing work for which I get the proceeds."

So, while I'm thinking of this I wonder, "Where could I find a large set of quality training data? Hell, where could I even find out what sorts of tasks people need to have completed?"

It was only a couple of jumps until my brain hit upon Amazon's Mechanical Turk as a way to simultaneously crowdsource any client work I might get while collecting a great training set of data for my machine learning algorithms. These people are just sitting there generating huge amounts of ready-made supervised training data...hey, wait a second!

I'm very curious if MTurk is a giant system for collecting training data for a supervised learning system. I mean, what better way to realize the dream of machine-driven wealth than creating the marketplace for connecting requesters to humans, then rating those humans by acceptance rate, and then using the resulting data to train an AI system that would then fulfil some unknown percentage of HITs automatically, all while getting paid to:

  • Serve client requests
  • Train and reward humans to create quality training data
  • Get paid to provide the marketplace while reaping the benefits in the form of what's probably the world's largest supervised learning training data set for menial tasks

How much more genius would it be to label these tasks as "human intelligence tasks" (many of them, of course, are still beyond the known capabilities of machine learning)? Many of the existing HITs on MTurk could conceivably be done by a machine intelligence that had been adequately trained, even if only a small percentage of them.

That's my theory for the week. Amazon isn't building Skynet or anything, but it could just be building something that would do away with all sorts of menial tasks that need to get done in the workplace.

You don't need exception notification dashboards

I see a lot of excitement in the development world, and a lot of products that don't need to exist, and I wonder how people are making actual, profitable companies out of them (investor funding doesn't count as profit).

There's so much going on, and it's so easy as a developer to think you're some kind of god with all you can do with computers. So much so that I am now in the habit of constantly questioning whether half of what I write even needs to exist.

You absolutely do not need exception notifier dashboards

I'm not saying you should ditch exception notification - you shouldn't. I'm saying you don't need a web-based GUI with responsive design and mobile apps on Android, iOS, Windows Phone, and Blackberry to know that your site is broken.

I used getexceptional for a little over a year from 2010-2011 (they are by no means the only company in this game; errbit, sentry, and airbrake are among the more popular), but I've come to think of apps that capture and aggregate exceptions as being of pretty low value. When I was using them I still wanted to know immediately when an exception was thrown, so I still got email alerts, which defeated the purpose of capturing them elsewhere. I can visually de-duplicate errors by reading through the subject lines of the emails, so the aggregation aspect wasn't terribly helpful. They weren't even all that useful for historical reasons (e.g. to see if there was a code regression) because:

  1. I never went back to read the old exceptions anyway, and
  2. I relied on the search function within my ticketing system to tell me all the details about past exceptions since my exception aggregator wasn't a good place to do this.

If it's a problem of too many exceptions, and an app is throwing so many exceptions so often that I feel I need a dashboard to read them (and an email filter to hide them), then I go home feeling like I'm simply managing the problem instead of fixing it.

No, actually, I feel that I've failed as a developer. This isn't what I studied and worked for years to be doing - bolting some exception aggregation service onto my app because I'm not preventing the problem from occurring in the first place.

Tired of useless stacktraces?

My favorite marketing quote from all these sites is the one on the sentry page under the "Instant Context" heading, which asks the question, "Tired of useless stacktraces?"

No, I'm not. They are incredibly useful tools that tell me precisely, unambiguously, the very place in my code that broke, along with the entire call chain that got me there. How are those useless? If I need more "context" then I can go calmly read the code. If you're writing in any interpreted language then there's not even any object or bytecode to decompile to get the original code back from 3rd party libraries. In the case of Ruby, every line of code that your app uses was either written by you or one of your contributors, or is in a gem whose code is also 100% available for you to read, because the code was never compiled into another form to begin with (native extensions possibly excepted).

Am I crazy?

How I slowly moved to Linux

This is the rough order in which I learned how to be marginally
useful/productive using Linux on a daily basis.

2010 - First use of Linux (Ubuntu, specifically) as a software deployment
       target. At this point I was able to login, install and uninstall
       packages, and make my way around the system.
       - apt-get
       - date
       - uptime
       - shutdown/reboot
       - cd
       - ls
       - top
       - ssh and ssh keys
       - sudo
       - grep/egrep
       - very basic output piping
       - very basic user and group management
       - very basic user and group permissions
       - chown
       - chmod
2011 - First use of CentOS as a deployment target. Learned more about system
       monitoring and task scheduling.
       - whoami
       - man (I had read manual pages before, but this is where I started to
         rely on them more regularly)
       - su
       - yum
       - cron
       - curl
       - felt somewhat competent in troubleshooting permissions issues
2012 - Raspbian, more Ubuntu, and using the Darwin/OS X terminal. Started to
       understand the differences between various distros, learned more about
       the user login process and the wisdom (and process) of creating
       unprivileged users for various things.
       - tty
       - watch
       - ccze - http://superuser.com/a/438932/25840
       - at
       - login profile customization
       - learned about run levels
       - iptables
       - more use of output piping
       - wget
       - tr
       - cut
       - set
       - awk
       - rsync
       - ssh tunneling
       - more focus on unprivileged users
       - attempted 'Linux from Scratch' build
       - chroot
       - very basic file system structure knowledge
       - very basic understanding of inter-process signaling
2013 - Learning about bash. Spent most of 2013 thus far learning to customize
       my shell and environment the way I like them.
       - started keeping and backing up my dotfiles
       - wrote first bash-based install scripts
       - custom bash functions and aliases
       - started using grep to read real-time logs more efficiently
       - pptp
       - find (beyond just copying/pasting examples online)
       - du
       - tee
       - iotop
       - useradd/adduser
       - usermod
       - userdel
       - read my first semi-thorough Linux admin guide to get more familiar
         with the OS as a whole, how it was built, and why it works the way
         that it does.
       - more system maintenance and monitoring knowledge
       - more focus on system resource usage, inter-process signaling
       - read up a little on the boot process, augmenting my previous work on
         run levels
       - server-less git server (super simple to set up w/SSH)
       - bash script to remotely monitor a dozen or so Raspberry Pis
       - wall
       - installed personal UPS + auto-shutdown scripts for home server
2014 - (planning)
       - 2013 is not over yet, but I hope to focus the rest of this year and
         next year on moving to Linux as my primary host OS, and pretty much
         living in Linux as much as possible. I have been hesitant to do this
         so far simply because, like most users, I am only slowly finding
         Linux replacements for things that I do in a desktop environment,
         such as playing music and editing images and video. Web-based and
         mobile solutions for the music issues have really helped alleviate
         that category of hesitations.
       - That said, I absolutely recommend jumping into the deep end with
         Linux. The sooner you put yourself in a situation where you MUST
         figure out the answer to a challenge, the faster you'll learn.
       - The power, speed, and consistency of living in the shell is
         wonderful, and I expect to continue to focus my efforts there. I
         have also had tmux and other shell power tools recommended to me,
         and will likely investigate a number of them in the future.


Fun facts about my programming history:
=======================================

I'd started using computers before 1992,
but 1992 is when I started coding.

--> more platforms used regularly
<-- fewer platforms used regularly
--------------------------------------------------------------------------------
1992 + Apple ][e (actually, a used Franklin Ace 1000 @ age 12)
|
1994 + MS-DOS (started using)
\
1996 + MS Win (started using)
/
1998 + MS-DOS (stopped using on a regular basis)
|
\
2001 + Mac OS 7-9 (started using/supporting)
2002 | Mac OS X (started using/supporting)
\
2004 + Linux (early, mostly failed, experiments with Mandriva and BSD)
|
2006 + MS Win (started using/supporting desktops+servers)
/ Mac OS X (becomes primary desktop environment; stopped supporting)
2008 + Sold my last Windows machine
\
2010 + Linux (started deploying Ruby to Ubuntu servers)
2011 + MS Win (stopped using/supporting all versions)
| Linux (started deploying to CentOS in addition to Ubuntu)
2013 + Linux (got serious about living in the shell)
| Linux (moved to vim full-time for development)
| (November) Linux now feels more natural than any other OS
| (November) bought my first dedicated Linux laptop

--------------------------------------------------------------------------------

List of scripting or programming languages/tools I've learned to use at one
time or another. Some of these are pretty embarrassing, but hey - the folly of
youth.

- MS BASIC on the Apple ][e
- FORTRAN (briefly)
- QBasic
- Turing (yeah, there was a language named after Alan Turing)
- MS batch scripting
- Borland Turbo Pascal
- C/C++ (both the MS sort and the gcc-compatible sort)
- 68000 Assembly (Motorola)
- Scheme (mostly for a college class)
- QuakeC (briefly)
- Java
- BlitzBasic
- AppleScript
- PHP (briefly)
- Ruby
- Python (I'm barely claiming this because I've only written a few hundred lines)
- bash scripting

Deployments and automated tests; business vs. engineering

The challenges are as old as time. When do you deploy and when do you test more? When should business drive deployments and when should engineering drive them? If business is demanding something, should engineering just do what they’re told, or put a stop to it?


More D3 wackiness

UPDATE 6/22: More information has come to light as Blizzard clarifies its position today. See the end of the blog post for our follow-up emails, which take this information into account.

===================

Just my friend, Dan, and I talking over email today about the Diablo III "72 hour trial" mode that has been making its way around the gamer news, and how we think it connects to some broader trends in the gaming industry. We're both old school gamers from the 80s to the modern day.

There is no "TL;DR" version of this conversation. Sorry, just gonna have to deal.


========= Dan: ===========

So…one of the commenters said it takes roughly one hour to get to the imposed level and story cap.  I have to laugh, because otherwise I’d be legitimately pissed, I think, because this really seems shockingly absurd.  Another commenter said (paraphrase) “So, I have to wait 72 hours to play a game I paid $60 for, but I only have to wait 48 hours to own a handgun?”
 
Thoughts?

========= Me: ===========

Wow. I don't know how I can take any kind of defensive position on that. Even I am starting to think that alternative games might be a good way to go (Torchlight 2, anyone - which is essentially a clone of D2 gameplay, but with updated graphics - it literally has the same soundtrack composer as D2). This also adds fuel to the fire for people who want a single-player experience. There's nothing you can say to those people that is going to make them happy.

I'm converted. The "online required" features in D3, and everything that comes from it, are a complete mistake.

What if I bought the digital version, installed it on a Wednesday, played for five minutes, and then didn't pick it back up until Friday? Would that work? Would it be unlocked? Why has a game developer put me in a position to even have to consider that question?

Is D3 in a race to anger more players than ME3 and Skyrim combined? ;)

========= Dan: ===========

Who knows, seriously?  None of us knows what their criteria are for being a “legitimate” player.  It will be very interesting to see if people are somehow banned from play even though they weren’t cheating or pirating, but the sad truth is, they’ve sold several million copies already so they’re in a really weird position of power.  A lot of industry people hoped the game wouldn’t sell as well as it did because of all these preposterous restrictions and limitations they imposed, but the underlying message here is that it doesn’t matter who you piss off, your games will still sell no matter how badly you inconvenience the player or how many hurdles you put in front of it.  And again as you implied, if they would just allow a fugging single-player offline option then they could easily lock the co-op for days or a week to secure their supposed piracy losses and keep the “cheaters from spoiling everyone else’s experiences” and it wouldn’t be that big of a deal.

I’m hoping that there are some actual repercussions here though, because even though I want to play D3, I’m perfectly happy to disregard it entirely so I don’t have to tolerate these kinds of things, not to mention for principle’s sake and not supporting a company who does these kinds of things to their customers.

Speaking of which, after I finish all the humble bundle games, I’m probably going to play Torchlight, so by the time you’re ready maybe instead of us going through D3 together we’ll just play Torchlight 2 instead.  What do you think?  J

========= Me: ===========

I think it's inevitable that their sales are affected by this decision - it's just that Blizzard has so many ravenous fans that the loss in sales won't be enough to really ruin things financially for them, and it's going to be very difficult for them to measure that loss. The loss in trust is much harder to quantify, and lasts a much longer time.

I agree that there should be repercussions for this. I'm not calling for anything crazy - just something customer service focused. I'm not sure what, but if they wanted to get the trust back, I think they should go as far as offering full refunds to anyone who decides they don't want to play along with this. In other words, a public statement saying something to the effect of, "We understand that many people are concerned with the co-op restrictions, and we're happy to let you vote with your dollar. If you feel strongly enough about our policies that you'd like to walk away, we understand. When you buy the game digitally you will have 72 hours in which you may request a full refund." That of course doesn't solve the problem that you'd still have to wait the 72 hours, but it's something. By then the skinner box has probably sunk its teeth into you, however, and you're unlikely to ask for your money back. I'm really hoping that this 72 hour thing is a stop-gap measure. As cool sounding as a "real money market" sounded when they were talking about it, I think they've put themselves in a position where people can literally create valuable things from nothing, and therefore create money from nothing. I think that alone (in addition to D3's general popularity) have made D3 a massive target for all kinds of mischief. This whole mess is unprecedented in video games, as far as I know. I know they had to really crack down on some stuff due to hacks in WoW, but those crackdowns came very slowly, and over years of gameplay. D3 has only been out for what, five weeks? I think it was probably a mistake to announce a real-money market from the start. They probably should have left that announcement out until after the major post-launch networking and gameplay bugs were sufficiently crushed.

Then again, I don't know how Blizzard's support and engineering teams are structured - it's possible that they have separate people doing separate things (managing user accounts, managing the market, managing gameplay servers), and that it wouldn't matter one way or the other. It just sounds like there's too many critical problems to solve week after week. Very few of the massively covered issues have been anything but critical flaws and hacks. Like Skyrim, most of the news is about game-breaking features on one level or another.

I think you're right about the weird position of power thing. This policy doesn't affect anyone who has been playing already for at least three days, but it seems like a major mistake to put something like this in place that affects all new digital customers. [...irrelevant rant about GameStop, redacted...]

I'm basically in a position to not play D3 at all unless and until these restrictions are lifted.

I'm actually 100% okay with playing Torchlight II instead of D3 (I checked, it's not out yet on PC, and there is a Mac client planned). The first Torchlight was really fun, even though it offered only a couple of new things over D2. At this point I'm willing to indefinitely put off D3 - something that I never thought I would hear myself saying.

========= Dan: ===========

[...Dan's response to my irrelevant rant about GameStop, redacted...] I see your point about waiting until they pull their heads out of their asses.  I intend to do the same. 

One thing that really bugs me about issues like Skyrim, D3, and SFxT have had is the seemingly almost complete lack of transparency.  I get that the customers can choose not to buy a product, but it seems like it’s impossible to get anyone besides a PR stooge to really discuss any of this kind of bullshit directly.  The PR releases could have been easily written with cut-and-paste from any entertainment industry’s response to a fiasco, and they don’t seem to really address the issue.  It’s hard not to conclude anything but greed is at work here, and it’s extremely discouraging. 

And honestly up to this point, there’s not been a good reason to purchase the retail version, because from what I understand it’s just a disc in a sleeve in a box.  Maybe that’s precisely what Blizzard wants because they want to save money on packaging costs, but if that’s the case, then again, it points to greed, since they knew they’d sell at least 2 million based on pre-orders alone.  I wouldn’t object to this at all if it meant that digital-only truly meant more convenience, but obviously that’s not the case.  It’s the definition of inconvenient to have to be at the mercy of the server and your internet connection at all times, to be forced to wait 3 days to play the game you paid for, to have to be concerned about hacking, and to worry that Blizzard might decide you’re not playing fair enough based on criteria they don’t disclose and that you’ll be banned from using the product you bought.

All the industry bigwigs are saying video games are heading in the direction of being distributed digitally only, and I think they’re basically doing everything they can to force that into reality.  It sounds ridiculous but all I can think of is some Dickensian dystopia or something, where the gamers are having to ask, “Please sir, may I have the right to play the game you made that I paid for?”  If digital-only distribution means the death of any product ownership whatsoever, and if Blizzard and EA are the ones setting “acceptable” precedents, then it feels pretty damn bleak.

Hopefully we’re all (gamers, developers, and publishers) still just scrambling around trying to strike a proper balance between Big Brother-esque lordship over gamers’ spending and playing habits and freedoms vs. the old static method of manufacturing physical media for resale and rentals.  And hopefully Blizzard and EA will end up being object lessons on what not to do in the digital media age, but right now it sure doesn’t feel that way.  :/

========= Follow up email from Dan: ===========

From techspot.com:

In somewhat related news, Blizzard announced that it will offer refunds to unhappy South Korean Diablo III players. The decision comes nearly a month after Korea's Fair Trade Commission began investigating complaints about the game's server issues. At the time, Blizzard refused compensation, but that violated Korea's consumer protection law, which guarantees refunds for defective products.

Blizzard has since addressed the connectivity issues by deploying more servers in the region. Players who remain unsatisfied will be able to apply for a refund between June 25 and July, as long as they're below level 40. Players below level 20 will be able to get a refund within 14 days of purchasing the game from now on. Additionally, Blizzard will offer a 30-day free trial of StarCraft: Wings of Liberty.

I guess the lesson here is to try to bring our consumer protection laws up to South Korea’s standard?  Maybe this is one of those repercussions we were discussing earlier today… 

========= New email from Dan on 6/22: ===========

[NOTE: Dan works for a financial institution, which is why he knows a lot about how banks process payments] 

http://www.destructoid.com/diablo-iii-s-new-user-restrictions-further-explained-229965.phtml

My theory here isn’t that Blizzard “misspoke” as they claim, it’s that they saw the tidal wave of rage from the internet, and backed it right the fuck up, then claimed it was an error.

Another point of confusion regarding all this is that their presumably self-coined term “unverified digital purchaser” implies that the 72-hour period is to allow them time to verify the validity of a digital purchaser, but the very nature of online card transactions in the world of finance is that they’re POS transactions, not ACH, meaning it’s the same as a card swipe, so the credit card company is accepting the responsibility for the transaction immediately upon issuing the authorization for funds transfer, not the merchant offering goods or services.  If that’s the case, then the only way Blizzard could really be combating fraud and cheating by doing this is if they were having a large enough volume of people perpetrating credit card fraud solely to purchase D3 and farm gold or start hacking other accounts.  I realize Blizzard is huge, and 6 million-odd players is a glowing target of a user base for fraudsters, but I find it very hard to believe the problem could be so prevalent as to justify what in practice turns out to be punishing your good customers in order to catch the bad.

Alternatively, if Blizzard allows direct withdrawals from savings or checking accounts to pay for digital purchases, then there’s already a waiting period for verification of the validity of the account in question (which is why whenever you request an automatic debit from your checking you see 2-3 transactions of less than a dollar, and have to report those amounts back to the originator), so it still makes no sense to have an additional waiting period. 

I’m truly baffled at this point, and I appreciate Blizzard’s promptness in stating that people just wanting to play aren’t locked out of the single player game after all, but either they lied or they didn’t, and that makes them either mythically, shockingly stupid, or merely dishonest and somewhat power-mad, and either way that doesn’t inspire any confidence from me to give them $60 to play their game.

========= Me on 6/22: ===========

I see the kind of thing you're describing all the time in tech. When you want to explain why something bad happened, you convert it to layman's terms to make it make sense, but you also hide a lot of the details as a result. Your analysis of what this might imply on the backend about how they process payments or verify identity, whether accurate or not, is the kind of thing that only comes from having specific industry knowledge of payments (which you happen to have in this case, and I at least understand from working with banks).

The whole thing is interesting to watch, and overall I still agree that this doesn't inspire confidence.

So when I read the list of restrictions from today's article, what I see is an attempt to specifically block common behaviors of the bad actors in the system. I agree that it punishes good players to catch the bad. This is a challenge in nearly all behavioral analysis in technology, and in many cases there's no better way to do it, at least at first. The challenge is that, if the value you get from breaking the system is gained in the first hours or days of behavior on a system, then you have to find a way to stall or prevent certain actions upfront. Waiting to see if someone exhibits the behavior after they've already gained the value (in-game gold or real money, in this case) is like pursuing a bank robber after you know they've already fled to a non-extradition country.

The other way to approach this sort of problem, in general, is to keep analyzing the behavior over a number of weeks or months and refine the behavioral blocks and filters such that the good players are less restricted, and the bad players are caught and/or prevented from breaking the system sooner. If Blizzard eventually lifts many of these restrictions while fraud in the system continues to decline, then that would be my best guess as to the approach they are taking.

That kind of stuff is pretty hard. I'm not giving Blizzard a pass here (and this is all speculation, of course, since we don't really know how Blizzard has come to this list of restrictions), but it does take time to develop the proper approach.

I think I'm pretty much done with this topic. D3 is effectively off my radar for the foreseeable future, so the damage has been done for me.

Switching away from Google Tasks: task context

I tried to abandon Google Tasks, going instead with todotxt, but eventually ended up on Wunderlist. Here's my story.

I've used Google Tasks and the tasks pop-up inside of GMail for years. It's been pretty good, and solves my basic problem: keep a unified list of "to-dos" and allow me to sort them by due date. It's been fine, except for one little thing: task context - the ability to see tasks by my physical location, and any other arbitrary context I want.

At first, todotxt seemed like the way to go, for a number of reasons. First, it's just a plain text file, and therefore "future proof" as it claims. While I can't deny the appeal of such pure simplicity, and the ease with which you can organize things into projects and contexts, I found that "future proof" didn't matter to me all that much. I don't have any fear that "to-do" applications are going to go away, and the other features (such as a historic report of tasks completed) just didn't matter to me. I also don't have a DropBox account, and while I know it would be dead-simple to set one up, it just seemed like extra work. However, I persevered and decided to give todotxt a real chance, because it's a command line tool, it syncs everywhere, and I develop on both Mac OS X and Ubuntu so I knew my list would easily port between platforms.

So, I installed it. I then spent the next hour playing around with my config file to setup colors and such, adding tasks, marking tasks as complete, and exploring all the features it had to offer. I even moved the entire list of my current to-do's into todotxt so I could get a feel for the real workflow on actual tasks I needed to complete.

Well, something didn't feel quite right. todotxt's operators and sets of commands didn't feel all that natural to me. They certainly fit well into the command line world; I just didn't care for all that typing to manage a list.

Enter Wunderlist

Wunderlist had all the things I loved about todotxt - keyboard shortcuts, easy context setting and searching, the ability to mark things as complete without reaching for my mouse - except that I feel Wunderlist does things just a little bit more smoothly.

First of all, I'm pretty much always in a place where I have a data connection (even when mobile, I tether to my phone), so it's trivial for me to manage tasks on my laptop most of the time.

Second, even when I'm on the go, the Wunderlist app for Android more than suits my needs, and so I don't actually need to be on a computer in order to access/manage my list. I realize todotxt has an app as well, so they're pretty equal here.

Third, while todotxt's use of special operators (+ and @) to mark things with context is fine, I didn't want to be limited by the operators that todotxt thinks are appropriate. Wunderlist already has awesome search (also accessible via keyboard shortcut), so it's trivial to search for things like "@work", and get all the tasks that I've tagged as "@work". The fact that I'm using the @ symbol is arbitrary and flexible should I ever want to use a different symbol because it's just plain text search, nothing complicated.

A single, unified list for everything I do...finally

This is incredibly important to me, because I don't like managing multiple lists - my to-do list is mine, meaning it's everything that I have to do. Why do I need more than one list - I'm just one person. That means a single, unified list of everything I need to remember to do. All I was missing with Google Tasks was the ability to filter based on my current location/context, in order to keep my tasks relevant to where I was and what I was doing. Wunderlist totally solved this for me.

Not only did Wunderlist finally solve my context problem, it took a lot less time to setup. I spent a couple of hours playing with todotxt in order to get a feel for it, but I was up and running with Wunderlist in a few minutes.

Lessons from @torvalds/GitHub commits discussion

The heated conversation that occurred around @torvalds' comments (that GitHub's interface encourages badly formatted commit messages) was lost on many. To the casual reader it might have even appeared as a pure rant. However, there are several jewels in this discussion that hit at the heart of why Linus feels the way he does (not to mention, he thinks that GitHub is doing lots of other things fabulously well).

Before I start, let me be clear that what I'm writing here isn't about the tone of the conversation. My conclusions come from trying to examine the points presented throughout the conversation, while trying to strip out the rhetoric.

So, here are my lessons learned from that discussion, and why I'm convinced that I'm going to start conforming my commit messages to that format.

The premise for formatting in the first place:

  • The formatting of commit messages matters all the time, but especially when you're working with the tools that come with git (git log, shortlog, gitk)
  • One of the most important things these tools (and commit messages in general) can do is communicate what is changing in code.
  • The commit message should be considered (and composed) as a highly efficient communication. The efficiency (not just brevity) of the message helps you effectively communicate with developers, whether that be other people or your future self.

So, given that commit messages are about communicating effectively and efficiently:

  • It follows that a formatting standard can result in many efficiency gains in collaborative software, simply because we're all aware of what happens to readability when we don't follow the standard.
  • Formatting (a 50 character title, with 72 characters for each line in the body) keeps things readable no matter where the message is viewed, whether in GitHub or in the tools that come with git (see the example after this list).
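
For concreteness, a commit message following that convention might look like this (the content is invented): a summary line of 50 characters or fewer, a blank line, then a body wrapped at 72 characters.

Fix off-by-one error in dependency resolver

The resolver dropped the last entry of each dependency list because
the loop bound was computed from an index that was already inclusive.
Compute the bound from the list length instead, and add a regression
test covering single-entry lists.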

So, the problem with GitHub pull requests is:

  • Especially when they are created via the web UI, the standard formatting isn't enforced, or even encouraged. Readability (and therefore efficiency of communication) suffers as a result.
  • The sheer volume of commit messages one must deal with on any popular project (the kernel, in this case) really demands that a simple, easy to follow format be conformed to. To draw an analogy, if you've ever been in a role where you have to handle hundreds of emails a day while maintaining inbox zero (maybe you're getting a lot of alerts/monitoring messages from servers you manage), it sure does help to have all of those emails conform to a common format so you can deal with them incredibly quickly, efficiently, and therefore be able to easily divide the signal from noise.
  • The formatting standards provide efficiency gains to all projects, regardless of their size. As your project grows (along with your collaborators and the volume of your commit messages), the amount of time you save by conforming to the standard far outweighs the time it takes to write good commit messages.

The reason I'm going to start writing commit messages that conform to the standard:

  • It doesn't hurt, and really, it's an easy format to follow.
  • It's going to save me time in the long run (even if I don't care about other people for some reason).
  • Given Linus' experience, and since he's probably one of the most experienced git users around, I trust that this format isn't something he champions because he thinks it's cool - it simply gets the job done in an incredibly fast and efficient manner.
  • I'm not a Linus acolyte - in fact I know little about him other than what pretty much everyone knows about him. I've decided to make the change because I can see the value in following the format.
  • I work on a team of other developers, and I want to make their lives easier, and encourage us all to adopt practices that help us do our jobs with less overhead.

 I'm really glad that I read through the bulk of this discussion - it made me reconsider how I do one of the most common tasks I do every day when writing software.

Starter 2D games with Slick2D: drawing the screen view

If all you're doing is drawing sprites in a 2D world, then there are basically two things you need to keep track of to decide which sprites to draw on the screen, and where on the screen to draw them. You have to think of your sprites as existing at a certain location in the world, and of what you see on the screen as just one view of that world, focused on an area.

The two things you need to keep track of are:

  1. Each sprite needs to have its location within the world
  2. Your "camera" needs to track its location relative to the world.

So, let's say you have a big, big world, with a 2D coordinate (x, y) space of 1,000,000 x 1,000,000 pixels (I'm using pixels as the unit of measure here, but that's an arbitrary choice, and the size of the world doesn't matter, I've just chosen a big one). Then, let's say you have a "camera" that's pointed at that world, and the view of that camera is what is displayed on your screen. In our example, the display that camera gives you is going to be 1024x768 pixels in size.

Let's also say that you use the arrow keys to move that camera around the world.

So, your world's coordinate space maps to your screen as such:

(0, 0)        +x
      +------------------>
      |
   +y |
      |      *  <= example sprite in your world @ coordinates (x=200, y=200)
      |
     \ /

When your sprites move "right" they increase their x coordinate. When they move "left" they decrease their x coordinate. When moving "up" they decrease their y coordinate (because y increases downward, on monitor displays), and when moving "down" sprites increase their y coordinate.

Now, again, what you see on your screen is just the camera's view of the world. So, let's say that the camera's upper-left corner is at (x=500, y=500). That would look something like:

(0, 0)        +x
      +---------------------------------->
      |
   +y |
      |      *  <= example sprite in your world @ coordinates (x=200, y=200)
      |
      |
      |         +===================+
      |         |     the area      |
      |         |  that the camera  |
      |         |    "sees" and     |
      |         |   shows on your   |
      |         |       screen      |
      |         +===================+
     \ /

With that setup, let's say that the camera is at (500, 500) (that is, the upper-left corner of the camera's view, as shown in this example, is at the world coordinates (500, 500)). And because the camera shows you an area that is 1024x768 in size, the opposite, lower-right corner is at (500+1024, 500+768) = (x=1524, y=1268).

Note that the sprite in our world is not inside that camera's view area. That means, when we render the camera's view on the screen, we won't see the sprite.

If, instead, the camera moved to (200, 200), then the view area of the camera would cover the world coordinates from upper-left @ (200, 200) to lower-right @ (1224, 968), and look something like this:

(0, 0)        +x
      +---------------------------------->
      |   
   +y |  +===================+
      |  |                   |
      |  |   * <= sprite     |
      |  |                   |
      |  |                   | <= camera's view of the world
      |  +===================+
      |
      |
      |
      |
     \ /

When the camera is in this position, the sprite is visible. If, say, the sprite were instead at (500, 500) while the camera stays at (200, 200), then when we draw the sprite, it would appear on our screen at coordinates (300, 300).

Why?

Because, and this is really the answer to your question, where you draw things on the screen is the sprite's world location (500, 500) minus the camera's location (200, 200), which equals (300, 300).

So, to review:

You move the camera's position around the world using the arrow keys (or the mouse, or whatever other control scheme you want), and you render each sprite relative to the camera position by taking the sprite's position and subtracting the camera's position; what you get are the screen coordinates where the sprite should appear.
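
In code, that review boils down to one subtraction per axis. Here's a minimal, self-contained sketch (the class and method names are just made up for illustration):

    // World-to-screen conversion: where something appears on the screen is its
    // world position minus the camera's world position.
    public final class WorldToScreen {

        public static float screenX(float spriteWorldX, float cameraX) {
            return spriteWorldX - cameraX;
        }

        public static float screenY(float spriteWorldY, float cameraY) {
            return spriteWorldY - cameraY;
        }

        public static void main(String[] args) {
            // The example from above: sprite at (500, 500), camera at (200, 200).
            System.out.println(screenX(500f, 200f)); // 300.0
            System.out.println(screenY(500f, 200f)); // 300.0
        }
    }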

But there's one more thing...

It's really inefficient to draw every sprite in the world. You only need to draw the sprites that are within the camera's view, otherwise you're drawing things that you won't see on your screen, and therefore, wasting rendering/CPU/GPU time.

So, when you're rendering the camera's view, you need to iterate through your sprites, checking to see if they are "on camera" (that is, whether or not they're within the view of the camera), and only drawing them if they are within this view.

In order to do that, you have to take the dimensions of your camera (1024x768, in our example), and check to see if the sprite's position is inside the rectangle of the camera's view - which is the position of the camera's upper-left corner, plus the camera's width and height.

So, if our camera shows us a view that's 1024x768 pixels in size, and its upper-left corner is at (200, 200), then the view rectangle is:

(200, 200)                      (1224, 200)
           +===================+
           |                   |
           |    *              |
           |                   |
           |                   |
           +===================+
(200, 968)                      (1224, 968)

The sprite's position @ (500, 500) is within the camera's view, in this case.
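
Here's a rough sketch of that check, using the same numbers (the class and variable names are made up; a real game would loop over its actual sprite list and call a draw method instead of printing):

    // Only draw sprites whose world position falls inside the camera's view
    // rectangle: the camera's upper-left corner plus the view's width and height.
    public final class CullingExample {

        static boolean isOnCamera(float spriteX, float spriteY,
                                  float camX, float camY,
                                  int viewWidth, int viewHeight) {
            return spriteX >= camX && spriteX <= camX + viewWidth
                && spriteY >= camY && spriteY <= camY + viewHeight;
        }

        public static void main(String[] args) {
            float[][] sprites = { {500f, 500f}, {200f, 200f}, {5000f, 5000f} };
            float camX = 200f, camY = 200f;      // camera's upper-left corner
            int viewWidth = 1024, viewHeight = 768;

            // Iterate through the sprites and only "draw" the ones that are on camera.
            for (float[] sprite : sprites) {
                if (isOnCamera(sprite[0], sprite[1], camX, camY, viewWidth, viewHeight)) {
                    System.out.println("draw sprite at world ("
                        + sprite[0] + ", " + sprite[1] + ")");
                }
            }
            // Prints the sprites at (500, 500) and (200, 200); the one at (5000, 5000)
            // is off camera and gets skipped. Sprites with their own width and height
            // need the check expanded by their size, as described below.
        }
    }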

If you need more examples, I have a working Slick2D tech demo, called Pedestrians, that has code you can look at. For details on how I calculate the area of the world that should be rendered, look at the render method inside this file, and pay special attention to the startX, startY, stopX, stopY variables, which control the area of sprites I'm going to draw. It should also be noted that my sprites (or "pedestrians") exist on a TileMap, so they aren't 1 pixel in size - they have a width and height of their own. This adds a small bit of complexity to deciding what to draw, but it basically comes down to, "draw what's within the camera's view, plus a little extra around the edges."

Only through practice and study will you get all the little ins/outs of how this works. The good news is that once you figure out how to do rendering with this basic 2D world vs. camera method, you'll pretty much know how to render graphics for all 2D apps, because the concepts translate to all languages.

I also have various videos of Pedestrians being run on my YouTube channel (the most relevant video is probably this one, which shows my basic pedestrians being rendered, and the camera moving around), so you can see what this all looks like without having to build the project first.

Slicehost, AWS, and Heroku: Looking back at two years of deploying independent Rails apps

I started writing independent Rails apps on my own two years ago this month. One of the key decisions I needed to make early on was where to deploy my apps. After two years using three of the biggest players (Slicehost - now part of Rackspace Cloud Hosting, Amazon Web Services, and Heroku) I give you my take on the experience and toolsets available.

Slicehost - my first Rails deployment environment

Slicehost has a bit of a cult following. I came onto their service after they'd already been acquired by Rackspace, but despite that I felt very connected to the developers and sysadmins that ran it. You could literally jump into an IRC chat with the sysadmins on duty if there was a problem, and get help from the experts immediately.

I ran an application on a VPS from them for 18 months, had only a single service issue (which was handled quickly and professionally), and I basically didn't have to do anything to fix it other than wait for the sysadmins to do their thing. I also spun up several differently sized slices during that time (some for just a day or two of testing), and seamlessly migrated slices between various sizes. The last time I had a VPS with Slicehost was June 2011, but the experience was great, and I'd recommend them to anyone who wants to run their own server in the cloud without a lot of frills. It's just the developer and their server: a simple partnership.

Heroku - the slickest app environment around

Heroku. It is by far the easiest way to deploy a Rails app without thinking about infrastructure. Their suite of plugins (and how ridiculously easy it is to add/remove them on the fly) has no match. They also offer completely free hosting for small apps with simple needs: a basic database, and a single dyno (web server instance).

It took me a while to try Heroku, but when I did, I found their support documentation to be up to date, detailed, and straightforward. I had no trouble at all getting my first app running on Heroku in less than an hour. I currently run a small app (rpglogger.com) on Heroku, and basically haven't had any issues. That being said, one thing that new developers run into with Heroku is the occasional gem incompatibility or obscure application error. I haven't had too much trouble along these lines, but I have, once or twice, had to do a quick Google search on an application error that only happened after deploying to Heroku, only to find that the solution was very simple: change a config setting, or update a gem. There's a big community of developers on StackOverflow that deploy to Heroku, and pretty much any problem you might have on their platform has been documented by someone else who already ran into it.

Amazon Web Services - ultimate control

I do quite a few small apps in my spare time, and I find that AWS reserved instances are perfect for this. Since I already had experience running my own server on Slicehost, it was a pretty easy decision to pay only $60 to get a reserved server instance for an entire year (I was paying $25/mo on Slicehost at the time for the smallest slice). Just one micro server on AWS is enough to run 2-4 small apps on the same machine, with multiple web server instances (vs. Heroku's one free dyno), and still have a little memory left over. It's also hard to beat the availability and global deployment footprint of AWS. Since you can get any size server, AWS works equally well for any number of systems, including the independent app on a single server.

On top of the initial cost for the reserved instance, I only have to pay for bandwidth and other AWS usage (S3 storage, for example). This makes AWS an amazing mix of ultimate scalability, great costs, and ultimate control.

Rackspace Cloud Hosting provides similar services, and they're probably comparable for most things. But if what you want is the Swiss army knife of cloud services (servers, load balancing, storage, database, VPN, and even NoSQL options), I think AWS is still way ahead of everyone else.

Conclusions - right tool, right job (no surprise)

So, I started on Slicehost, then tried AWS, then Heroku, and today I'm back on AWS for most of my work. I can't really say too many bad things about any of these services. I think they serve different priorities very well, and that all of these services deliver on what they say they can do:

  • Slicehost for really simple VPS
  • Heroku for the slickest app deployment in town
  • AWS for all the power and flexibility you could want

Continuous Delivery reading and resource list

This is a re-post of an article I wrote for NUBIC:

My technical project for the year, I've decided, is to build a continuous delivery system inside the NUBIC dev team.

Here's a quick reading list of source materials that I'm using to learn how to do it (blog posts to follow as I document the process of building the system internally):

These things couple very well with additional practices that NUBIC embraces as part of its software development process, including:

Finally, ThoughtWorks Studios has a commercial product called Go for automated release management. A couple of people from ThoughtWorks also happen to be the authors of the book on continuous delivery.

The most expensive calls to make via Google Voice from the U.S.

From the bag of random...

A year or two ago, North Korea was the most expensive place to call via Google Voice from the U.S. A quick scan over the latest rates shows some interesting options, such as Antarctica and several satellite phone services.

The most expensive places to call (per minute) via Google Voice, from the U.S.: 

  • $6.90 - Satellite Service - Inmarsat
  • $4.99 - Satellite Service - Thuraya
  • $4.03 - Satellite Service - Iridium
  • $2.29 - Netherlands - (paging service)
  • $2.00 - Antarctica
  • $1.90 - Ascension Island
  • $1.70 - San Marino - Conference Services
  • $1.39 - East Timor
  • $1.30 - Diego Garcia
  • $1.19 - Sao Tome and Principe
  • $1.09 - Niue


Why RTFM doesn't work

This is a re-post of the article I originally wrote for NUBIC:

It's not the users' fault. Honestly, it's not.

When answering a technical support question, have you ever asked someone, "Did you read the manual?" Well, put away your superiority complex for a moment, and realize that your users are wondering why they need a manual in the first place.

Manuals stink, plain and simple, so stop using them whenever possible. If you've got a complex application, website (or really any training process whatsoever), and you feel that you aren't receiving the respect you deserve for writing that 900-page, 100% comprehensive training manual, stop spending time on trying to improve the manual, and instead change the system.

Here are a few things you can try that are very simple, and very effective:

  • Ask why - first and foremost you need to understand why the user is having a problem with your application, then you need to correct the flaw that is causing the problem in the first place, thereby eliminating the need for the user to ask the question at all. A great method for accomplishing this is the 5 Whys technique. During the process of asking "why" it's important to always be gracious about honest feedback, and curious about how people arrive at their state of confusion. Once you've figured out what's at the root of the problem, it's usually a trivial thing to change it.
  • Show, don't tell - create a short training video that shows people how to use your app, rather than trying to explain it via text and pictures. If your training video can't correctly explain it in less than three minutes, your app is either too complex, or your video is trying to do too much. Either fix your app, or sharpen the focus of your video. Great examples are the instructional videos that introduce SquareSpace. They are short, focused on a single topic each, and (in the case of SquareSpace) linked directly from the pages on which the related question might be raised in the user's mind. A user is editing a webpage and wants to know how to add an image? The video for editing pages is linked from the page editing screen. Simple. It's true that they still maintain a searchable collection of videos that any user can simply watch, but the fact of the matter is that pretty much no one is going to go through this library and watch all the videos first. Users will typically try something, and only when they fail will they ask for help.
  • Protect users from accidents - there are many times that users will do things that they don't know are dangerous until it's too late, and they can't go back! Whenever possible, provide an "undo" function that allows users to fix mistakes with a simple click or keystroke. This method is often far superior to shifting all responsibility onto the user and presenting them with "Are you sure?! You cannot undo this!" sorts of messages. Those messages make users fearful, cause them to stop and call you for help making a decision about what to do, and ultimately shift blame to the user, when simply providing an "undo" function largely prevents the problem from happening in the first place. Even the most seasoned users will occasionally make mistakes. These people aren't "dumb" - they're just human, after all. Do you really want to have to recover lost data, or blame them for the mistake, when your system could simply protect users from such accidents in the first place?
  • Automate it - sometimes people make mistakes when doing repetitive tasks, because humans aren't as good as computers at doing highly repetitive things accurately 100% of the time. This problem is exacerbated by processes that have multiple steps, where a mistake in any one of the steps can cause the whole process to break down. Try helping the users of your site or application by pre-filling form values, automatically inserting reasonable defaults, or better yet, completely automating the process whenever possible. If there's no reason that a human really needs to be involved in a process, take them out of the loop and save everyone some time and energy.
  • Language is imprecise - step-by-step instructions, no matter how detailed and precise, no matter how carefully worded, are difficult to follow. Users get lost in lengthy instructions, misunderstand or misinterpret technical terms, and simply don't want to read instructions anyway. Providing users with a glossary of terms (thinking that the manual should explain itself) isn't really the answer either. So, use pictures instead of words when possible, and video instead of pictures when possible. The complications of interpreting language are part of why IKEA's assembly instructions contain no words, only pictures.

The introductory page explains how to avoid damaging your new furniture during assembly, and what to do if you need help or are confused. Pretty clear, yes? (1) put a carpet or rug under the pieces while assembling them, (2) if you're confused, look in the manual for a picture that shows what to do, and (3) call IKEA. Note that the last picture isn't a person on a phone calling IKEA - it's literally a handset connected to IKEA. When I see this, I think only two words: "phone IKEA". The implication is uncomplicated, and clear. Also note that this caption describing those four pictures took an entire paragraph. Not very efficient, friendly, or helpful, is it?

SQLite3 locking and database busy messages

Here's an old, but good post on understanding SQLite3 concurrency, threading, and simultaneous writing issues. It's something that many developers, especially developers of embedded and Rails apps that haven't migrated to another database such as MySQL, PostgreSQL, etc., have run into time and again.

The most confusing part seems to be that SQLite defines "concurrency" support as trying to be really fast, and therefore not holding onto exclusive write locks for more than a few milliseconds - but that's not really the same as true, concurrent writes across multiple threads.

I continue to get the occasional up-vote on this post, so it appears to be providing some long-term insight and value. Although the question is tagged as iOS, the answer is applicable to any use of SQLite where multiple threads are involved.

I am actually a big fan of multi-threaded code and code libraries in general, but it's a thing best wielded after acquiring some real-world experience (and failing a few times in order to understand the complexity of it). Until then, if you need multi-threaded access to a database, swap out your DB with one that handles this for you, rather than trying to figure out how to make SQLite capable of concurrent writes. If swapping out the DB is, for some reason, not an option for you, and your only viable option is SQLite, then you need to stop trying to do multiple concurrent writes (change to a single-threaded app), or you'll drive yourself insane.
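
If you're stuck with SQLite and multiple threads anyway, the simplest pattern I know of is to funnel every write through a single thread, so the database only ever sees one writer at a time. Here's a rough sketch in Java (assuming the xerial sqlite-jdbc driver is on the classpath; the class, table, and method names are made up for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // All writes go through a single-threaded executor, so only one write ever
    // touches the database at a time, no matter how many threads call saveScore().
    public class SingleWriterDatabase {
        private final ExecutorService writeQueue = Executors.newSingleThreadExecutor();
        private final Connection connection;

        public SingleWriterDatabase(String jdbcUrl) throws Exception {
            this.connection = DriverManager.getConnection(jdbcUrl); // e.g. "jdbc:sqlite:app.db"
        }

        public void saveScore(final String player, final int score) {
            writeQueue.submit(new Runnable() {
                public void run() {
                    try {
                        PreparedStatement stmt = connection.prepareStatement(
                            "INSERT INTO scores (player, score) VALUES (?, ?)");
                        stmt.setString(1, player);
                        stmt.setInt(2, score);
                        stmt.executeUpdate();
                        stmt.close();
                    } catch (Exception e) {
                        e.printStackTrace(); // a real app would handle/log this properly
                    }
                }
            });
        }
    }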

And for some multi-threading laughs, this Xtranormal video is awesome!

Install Growl for Linux and gntp-send on Ubuntu 11.10

I use gntp-send for Growl on Linux, in Rails development. Here are the instructions that I followed that seemed to work:

Add two additional software sources to your `apt-get` repositories

    sudo add-apt-repository ppa:mattn/gntp-send
    sudo add-apt-repository ppa:mattn/growl-for-linux

Download the .tar from https://github.com/mattn/gntp-send/downloads

    tar -xvf [name of the tar file you downloaded]

Install the compiler dependencies. I had to install the following in Ubuntu 11.10, but watch for error messages indicating that you need additional libraries.

    sudo apt-get install libtool automake
    ./autogen
    ./configure
    make -f Makefile
    sudo make install

Install Growl for Linux (this is the easy part)

    sudo apt-get update
    sudo apt-get install growl-for-linux

Finally, start Growl manually (it doesn't appear to start up automatically)

You should start receiving alerts, if everything has gone well.

UPDATE: After using this for a few weeks, it seems that I often have to close and re-open growl after the machine sleeps, or after other system events that cause the growl alerts to stop appearing. I haven't yet looked into what the specific cause is, so I don't have any more information than that, but I figured I'd let people know that this isn't a 100% solid setup yet.

Simply running growl again seems to fix it, but it's rather annoying that it doesn't just work as-is.

Time-based and sub-pixel movement in 2D games

For many people writing their first 2D game, the concept of time-based movement (number of pixels per second) vs. frame-based movement (number of pixels per rendered frame) can be very confusing, especially because programmers are often used to making fixed changes per loop iteration.

So, here's my answer to this age-old question on StackOverflow. It explains why you have to do time-based movement for sprites/objects in your game, and why time-based movement is the only way to get a consistently smooth experience while frame rates go up and down.

Finally, the question (and answer) reference using Slick2D, an awesome cross-platform (Windows, Mac, Linux...and hopefully Android some day), 2D Java game library that exposes hardware-accelerated graphics, OpenAL for environmental audio, and also takes care of handling input.
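
To make the idea concrete, here's a rough sketch of frame-rate-independent movement using Slick2D's update callback (where delta is the elapsed time in milliseconds; the class and field names are my own, made up for illustration):

    import org.newdawn.slick.BasicGame;
    import org.newdawn.slick.GameContainer;
    import org.newdawn.slick.Graphics;
    import org.newdawn.slick.SlickException;

    // Time-based movement: the position is a float (so sub-pixel movement accumulates)
    // and each update moves the sprite by speed * elapsed time, not by a fixed amount.
    public class TimeBasedMovement extends BasicGame {
        private float x = 100f;                        // world position, kept as a float
        private final float speedPixelsPerSecond = 120f;

        public TimeBasedMovement() {
            super("time-based movement");
        }

        @Override
        public void init(GameContainer container) throws SlickException {
        }

        @Override
        public void update(GameContainer container, int delta) throws SlickException {
            // delta is milliseconds since the last update; convert to seconds so the
            // sprite always moves 120 pixels per second, no matter the frame rate.
            x += speedPixelsPerSecond * (delta / 1000f);
        }

        @Override
        public void render(GameContainer container, Graphics g) throws SlickException {
            g.fillRect(x, 200f, 16f, 16f);             // stand-in for drawing a sprite image
        }
    }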

Don't overthink it: You cannot predict the future

You cannot predict the future

Wouldn't it be awesome if we could predict the future? We could perfectly play the stock market, we would know if we would pass a test in school, and we would know what the weather would be like tomorrow. Somehow we haven't yet managed to figure this out. Why not?

You can't be too specific. Take, for example, the weather. It would be great if we could predict exactly what the weather would be like tomorrow, next week, next year. The most useful prediction of weather would be a personalized weather plan, based on where we would be, at what time, and therefore whether or not we need to bring an umbrella with us. Yet, we cannot predict what the temperature in a specific city will be just 24 hours from now with 100% accuracy. This happens despite having detailed weather pattern data going back over 100 years, fancy weather maps with lots of colors, and weather people that have a degree in meteorology. Perhaps we're getting better and more accurate over time, but right now, one of the problems with predicting the weather is that we all want the weather forecast to be correct for us, just one person among billions of people on the planet. If you look at weather over a much wider area, such as the midwest United States, you can much more accurately predict scenarios such as, "Will it rain in the plains states tomorrow?" but you can't necessarily predict much more specific details such as, "How many inches of rain will fall?"

Some things don't happen like clockwork. As much as we might want to be able to know exactly when the cable guy is going to show up at our house, so we can arrange to be home during the 30 minutes that he will actually be there, some things just don't happen like clockwork. Some might say that the cable guy should be able to look at his list of visits for the day and be able to tell you, "I'll be there at 2:35pm." However, in a world where the reason the cable guy is visiting is well known, but the cause of the trouble could be any of a hundred things, it's much harder to predict precisely how long each visit will take, what traffic conditions will be like, and what will actually be required to repair each of the issues he will work on before he comes to your house. Due to many factors that are simply outside the control of the cable guy, he just can't give you a down-to-the-minute estimate of when he will be in a particular place.

Many problems are more complex than we realize. Ever been late getting somewhere, even though you left with plenty of time? Was there traffic along the way? Did you get lost? Was there construction happening along your route that you didn't know about ahead of time? There are tons of factors to consider in many "predict the future" problems, many of which simply aren't within our control. Construction is an especially good example, because not only is it unpredictable how, exactly, it will alter your travel time, the construction process itself includes tons of little details, people, schedules, etc. that are themselves unpredictable. It's much easier to say, "Will this road be finished this year?" than it is to say, "Will the concrete for the sidewalks arrive at 3:05pm as scheduled?"

There is randomness in the universe. Some of the processes in the universe act on randomness. Flipping a coin is a perfect, but simple, example. If you flip a coin one hundred times, you're almost certain to get roughly fifty "heads" and fifty "tails". That part of it is predictable. What isn't predictable, however, is what the next coin flip will be: heads or tails? You might get it right 50% of the time, but that doesn't really do you any good, does it? Events that happen with true randomness can only be analyzed to give a person the odds that a particular outcome will happen next, but not what will actually happen next. It's what makes betting work in Las Vegas.