A post about backing up your data.

Posted on October 29th, 2013 in security

Death of my MacBook Air

A few weeks ago my MacBook Air died. Its SSD went out (and coincidentally, a few weeks later Apple issued a notice that it is replacing the SSDs in new MacBook Airs). The main reason for this post is to tell you to BACK UP YOUR DATA. Luckily, I had all my important data backed up in Dropbox, and the larger files I don't want replicated to every machine backed up to Amazon S3/Glacier via the amazing Arq software.

A warning on local backups

One might find it most cost-effective to plunk down $99 for a 1TB USB hard drive and just run Time Machine (on a Mac) or Carbon Copy Cloner, but that is not good enough as your only backup. A fire in your home or a theft would wipe out all your precious files and memories.

Dropbox to save the day?

Above, I mentioned I use both Dropbox and Arq. Dropbox is just convenient: drag files into the Dropbox folder and they're backed up. You can now even store files in Dropbox and choose not to sync them to every machine (an annoyance in the past), but at $40+ a month for 500GB of storage it's ridiculously expensive, almost 500 bucks a year. If you were to put that data on Amazon S3, it would cost you roughly $8.34 a month.
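
Back-of-the-envelope, here's that comparison per year using just the figures above (a rough sketch; the actual prices have certainly changed since this was written):

# Yearly totals from the numbers quoted above: Dropbox's 500GB plan vs. the same data on S3.
awk 'BEGIN { printf "Dropbox:   $%d per year\n", 40 * 12; printf "Amazon S3: $%.2f per year\n", 8.34 * 12 }'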

Amazon S3 and Arq

Amazon S3 is big, fast, and cheap. The problem with Amazon S3, though, is that it's not user friendly. It's really aimed at developers writing apps that store large amounts of data, but Amazon places no restriction on use cases, so it's perfectly fine for personal data storage.

I really dig Arq because it's a one-time fee of $40; after that you simply connect it to Amazon S3 or Glacier and pay only for what you use, forever. Amazon does charge for uploads, downloads, and each request against your files, but we're talking fractions of a penny, so while it almost certainly works out a lot cheaper, the two products aren't directly comparable. Here's Amazon's S3 pricing chart. The other two great things about Arq are that it does incremental backups, meaning it only uploads changes over time, reducing your costs and upload time, and that it encrypts your data before placing it on Amazon, so even if someone were to break into Amazon's servers or illegally access your account there, your data is still encrypted and only Arq (with your password) can decrypt it.

Enter Bitcasa

That said, Arq is probably still too complicated for the average computer user. Simply signing up for Amazon S3 and generating an access key is above most people's heads. I'm sure there are plenty of alternatives out there, but I'm looking for one I can recommend to friends and family. I discovered Bitcasa, a hot local startup with big names behind it, offering unlimited storage. I honestly haven't used it enough to recommend it, nor is the company old enough to have earned my trust yet; who knows if it will flounder. But I did ask their support team some questions:

Davita: Hello! Do you have any questions I can help answer for you?
You: hi
i don’t get how this can be infinite
what if I have 100TB to transfer
also do i have immediate access to download it
Davita: Hi there :) How are you today?
I get it, you’re thinking “what’s the catch”
no catch
You: this is not financially viable as a company
Davita: we’re able to do this because of our deduplication process
You: oh ok
do you throttle upload after a certain amount?
Davita: Nope, we do not throttle
You: and where do you store this data?
on amazon s3 or an internal data center?
Davita: Amazon s3
You: what about Amazon Glacier
do you use that or no?
Davita: Currently, no
Awesome
You: If i upload 1TB to S3 that’s the equiv of paying Amazon $100 a year. So bitcasa is betting that my content will be similar to others so they can make somewhat of a profit
by reducing the upload size on s3
Davita: That’s one way to think about it :)
You: Cool. Thanks for the info!

If the company does go under, they will certainly give you enough time (weeks) to download all your data beforehand. If someone actually gives it a try, let me know how it is.

Conclusion

Start with Dropbox for your small stuff, which is a no-brainer (free 2GB on signup), and then find a way to back up your larger data offsite. Try Arq (feel free to comment if you need help with it) or follow other suggestions in the comments.

Improving prototypal inheritance in JavaScript with a surrogate class.

Posted on October 29th, 2013 in JavaScript

A friend of mine encountered this page by Michael Bolin, author of Closure: The Definitive Guide. It's a great writeup that distills a lot of JavaScript edge cases and "gotchas" for someone just starting to get advanced with the language. I would have appreciated something like it back in the day; like most people, I learned these things slowly through experience and bug-bashing over the years.

Though the page covers prototypal inheritance well, and its powerful ability to do pretty much everything that the classical inheritance models of languages like Java can do, one small bit it didn't cover was calling super on a parent class. The issue is that there is no built-in "init" method on a JavaScript constructor: the simple act of instantiating a "class" in JavaScript both creates the object and runs the setup routine inside its constructor. This is perfectly fine for most day-to-day JavaScript, but if you ever need the parent class to do some additional setup work, for example in a factory pattern, you need this support. JavaScript frameworks like Backbone.js and jQuery invented their own ways to solve this, but it's worth knowing the minimum amount of code needed to solve it without relying on a third-party framework.

The answer is to create an empty "surrogate" class to hold the parent's prototype. That way the parent's constructor never runs while you're wiring up the child class; you get to run its contents when you're good and ready:

function Mammal(obj) {
    this.name = obj.name || 'Anonymous Mammal';

    // Basic conditional code you normally couldn't do.
    if(obj.pouch){
        this.category = 'marsupial';
    }
}

Mammal.prototype.legs = 4;

var lion = new Mammal({ name: 'Lion' });
var tiger = new Mammal({ name: 'Tiger' });

// This middle constructor is required because we can't instantiate `Mammal` directly as `Marsupial`'s prototype
// or else it would run through its "initialize" stuff too early.
function Surrogate() {}
Surrogate.prototype = Mammal.prototype;

function Marsupial(obj) {
    Mammal.apply(this, arguments);
}

Marsupial.prototype = new Surrogate();
Marsupial.prototype.pouch = true;
Marsupial.prototype.legs = 2;

var kangaroo = new Marsupial({name: 'kangy'});

The above example is highly contrived, but you can see how much responsibility you can push off the child class onto its parent. That means less setup work for each child, and it lets the parent class decide the "DNA" of the child based on the parameters passed in: pass { name: 'kangy', pouch: true } and the Mammal constructor sets category to 'marsupial' for you, while legs and the default pouch still come from Marsupial's own prototype. If this were a vehicle parent class, for example, it could mix in a diesel engine class when the child is a truck, or a hybrid engine class when the child is an electric car.

Preparing for iOS7 by doing a full iOS backup to an attached drive

Posted on September 24th, 2013 in iOS, iPhone

I rarely do full backups of my iPhone because I don't usually have 64GB free on my drive, but in preparation for iOS7 I knew I needed to.

Unfortunately there is no way in iTunes to specify an alternative drive to back up to. Luckily, I found this post stating that you can symlink a folder on an external drive in place of your regular backup folder; the OS doesn't know the difference:

 # You may need to move or delete your old Backup folder first.
 ln -s /Volumes/drivename/iBackup/ ~/Library/Application\ Support/MobileSync/Backup
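
If you'd rather keep the old local backups around than delete them, a sequence like this should work (drivename is a placeholder for whatever your external drive is called):

 # Move the existing backups aside, then point iTunes at the external drive.
 mkdir -p /Volumes/drivename/iBackup
 mv ~/Library/Application\ Support/MobileSync/Backup ~/Library/Application\ Support/MobileSync/Backup.old
 ln -s /Volumes/drivename/iBackup ~/Library/Application\ Support/MobileSync/Backup

 # The symlink should now appear in place of the old folder.
 ls -l ~/Library/Application\ Support/MobileSync/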

Rate limiting your WordPress login from wannabe hackers

Posted on May 1st, 2013 in nginx

As your blog gets popular, a lot of people will try to hack it, especially if it's hosted on Amazon's cloud. If you're running WordPress and not already running Nginx as a reverse proxy, you should be. It makes the site hella fast and a lot more scalable, especially with the Nginx Proxy Cache Integrator. With it, a small Amazon EC2 instance can withstand Techcrunch and Mashable hits; I know because we do it all the time on our corporate blog.

Security-wise, you can move your SSH port, rely on key-based login only, and so on, but nothing prevents script kiddies from running a brute-force dictionary attack on your WordPress login page. Even if the attempt is fruitless, it can create unnecessary load. Rate limiting just the login page with Nginx solves the issue:

http {
   limit_req_zone  $binary_remote_addr  zone=one:10m   rate=5r/m;

   server {
       proxy_cache_valid 200 20m;
       listen       80;
       server_name  site.com www.site.com;

           location ~* wp\-login\.php {
               limit_req   zone=one  burst=1 nodelay;
               proxy_pass http://127.0.0.1:8080;
           }
   }
}

The above limits a client to roughly one login request every 12 seconds (rate=5r/m); the 10m is the size of the shared memory zone that tracks client addresses, not a time window. Note that this does not affect any other requests to the site. Be sure to use the nodelay flag so Nginx sends a 503 "Service Temporarily Unavailable" response once the limit is exceeded, instead of just slowing down the client's requests.
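
To sanity-check it, hit the login page a few times in quick succession from a shell (site.com standing in for your own domain, of course):

# With burst=1, expect the first request or two to return 200 and the rest 503.
for i in 1 2 3 4; do curl -s -o /dev/null -w '%{http_code}\n' http://site.com/wp-login.php; done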

AWS: Gaining SSH access to an EC2 instance you lost access to.

Posted on April 9th, 2013 in AWS

I had a situation where an employee created a new EC2 instance with his own keypair and was out the next day. I needed access to the box immediately, which posed a problem. Here is how I gained SSH access via the AWS web console: I detached the EBS volume, attached it to another EC2 instance I did have access to, added my SSH public key to ~ec2-user/.ssh/authorized_keys, then reattached it to the old instance. It's amazing the ideas that strike you in an emergency.

As long as you have full AWS console access and some light Unix chops, it should be fairly straightforward (the commands for steps 3-5 are collected in a sketch after the list):

  1. Go to the Amazon EC2 control panel and click "Volumes" (under Elastic Block Store). Use the attachment information to find the EBS volume attached to the old EC2 instance.
  2. Detach it and attach it to an EC2 instance you do have SSH access to, all from the web console. Take note of the device path on the old instance, probably /dev/sda1; you will have to reattach it to that path later, and AWS doesn't always guess correctly. When you attach it to your other instance it will probably show up as /dev/sdf or something similar, since /dev/sda1 is taken by the root drive. You can see this in the EBS Volumes table under "Attachment information"; it will look something like (<instance name>):/dev/sdg.
  3. Use SSH to connect to your good instance. Run sudo mkdir /mnt/oldvolume and then sudo mount /dev/sdf /mnt/oldvolume (or whatever device path the attachment information panel showed). Your files should now be available under /mnt/oldvolume.
  4. Add your SSH public key to /mnt/oldvolume/home/ec2-user/.ssh/authorized_keys, i.e. the old volume's copy of the file.
  5. Unmount the volume with sudo umount -d /dev/sdf and follow the steps above in reverse to reattach it to the old instance. You should now be able to log in as ec2-user on your old box.
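
Put together, steps 3 through 5 look roughly like this on the working instance (a sketch, assuming the volume showed up as /dev/sdf and the old box is a standard Amazon Linux AMI with an ec2-user home directory; on some instances the device appears as /dev/xvdf instead):

# Mount the old volume (use the device path the console shows).
sudo mkdir -p /mnt/oldvolume
sudo mount /dev/sdf /mnt/oldvolume

# The key you used to reach this instance is already in its own authorized_keys,
# so appending that file grants the same key access to the old box.
cat ~/.ssh/authorized_keys | sudo tee -a /mnt/oldvolume/home/ec2-user/.ssh/authorized_keys

# Unmount before detaching the volume in the console and reattaching it to the old instance.
sudo umount /mnt/oldvolume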