Rescuing Encrypted files on ACD

So Amazon is shutting out Linux users.  But what if I have a bunch of files up there, encrypted with my old encfs and acd_cli scripts?

I can copy down the encrypted files using their client at any point, but how will I know which one is which?

I did the following.  First, create a temporary directory.  I did this in my $HOME on my Mac.  Find a way that still works to mount the drive (I used ExpanDrive).  Once that is prepared, change into the mounted, still-encrypted ACD directory and run a command like the following:
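
Something along these lines – the two find runs are a reconstruction rather than my exact command, and $HOME/temp is simply where I chose to put the skeleton:

    # Recreate every directory, then touch a zero-byte copy of every file,
    # under $HOME/temp (run from inside the mounted encrypted directory)
    find . -type d -exec mkdir -p "$HOME/temp/{}" \;
    find . -type f -exec touch "$HOME/temp/{}" \;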

Let it run for a while; it may take several minutes.  This will create in $HOME/temp the identical directory structure as on the remote drive, with identical filenames – but the files will all be zero bytes!  What good is this?

Thanks to the consistency of encfs, you can mount and decrypt this skeleton directory like this:
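
Assuming you kept a real (non-empty) copy of the encfs config somewhere – the zero-byte .encfs6.xml in the skeleton obviously won’t do – it looks roughly like this:

    # Mount the skeleton with the same encfs config and password as the real data
    mkdir -p "$HOME/temp-decrypted"
    ENCFS6_CONFIG="$HOME/.encfs6.xml" encfs "$HOME/temp" "$HOME/temp-decrypted"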

Now, use some other tricks to find the matching filenames and you can manually download the specific encrypted files you want.
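
One such trick is the inode matching from “Managing Encrypted files on Amazon Cloud Drive” below – roughly this, with a made-up filename, and assuming your filesystem reports matching inodes across the encfs pair:

    # Look up the plaintext file's inode in the decrypted skeleton, then find
    # the matching scrambled name in the encrypted skeleton
    find "$HOME/temp" -inum "$(ls -i "$HOME/temp-decrypted/videos/wanted-file.mkv" | awk '{print $1}')"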

Amazon Drive Shutouts

Amazon is in the process of revoking API keys for a number of apps used to access their Cloud Drive product.  They started with acd_cli and have now banned rclone, and apparently Stablebit CloudDrive.  As users of these products funnel into the few remaining options, the API load from those surviving products will certainly increase, possibly leading to more bans.

Why?

Amazon is theoretically overselling cloud drive capacity.  They call their service “Unlimited” but are counting on the fact that no sane person will upload more than a few hundred gigabytes of data.  Using their client, yes, that would be true.  (To be sure, their client has improved considerably in the last 6 months or so).  The above clients, though, allow you to mount your remote drive like any regular network-attached drive, and work directly off it.  With that ability, it is worth putting terabytes of files on a cloud drive and freeing up local resources.  Some files, like video files, are ideal for this application.

Some very sane people started storing 5, 10, 50 and more terabytes on their service.  Can you imagine the accountants panicking?

But I said “theoretically” overselling.  Is that true?  Are these cloud drives really losing money by allowing large amounts of storage?  Let’s calculate the cost of storage, in a grossly oversimplified calculation.

I looked up the price of a 2 TB hard drive this morning.  It was on sale for $89.  If I wanted 2 TB of cloud storage for one year, at that price it should cost ($89/12) $7.42 per month.  Do you know any cloud services that offer that price?   Amazon’s Cloud Drive is $60/year – about $5 a month – for “unlimited” space.  Based on the price of this drive (and I know this bears almost no resemblance to reality), they are expecting to split the purchase cost of that drive among ($89/5) 17.8 people, who can each store a maximum of (2000/17.8) 112.3 GB.  Would you pay $5 a month to store only 112 GB of data?  Me neither.  Amazon naturally doesn’t expect that either, so it likely means their cost is far, far cheaper than that.

Where’s the “value-line” for you at that $5/month price though?  500GB?  1TB?  2TB?  I’m guessing it’s somewhere around the 1TB mark, with options for a little more.  (Incidentally, that’s kinda the price of Office365, including 1TB of storage and, oh yes, a full office suite thrown in for “free”).  Would you pay, oh, $7.42 a month for that capacity?  Wait, that is totally coincidentally the monthly price of that 2 TB hard drive!  And you get to keep the drive after a year, and get a second one for replacement or expansion, every. single. year.

Their business is NOT based around cheap, small hard drives at retail prices.  They run a staggeringly huge data warehouse with storage pools.  Their per-gigabyte pricing is a fraction of what ours is, leaving plenty of margin to pay for the overhead of attaching that storage to the Internet.  They can likely afford a few hundred outliers.

So it’s not just about storage space.  It’s about reliability, availability, and convenience.  Kinda those things you lose when you shut out third party apps eh, Amazon?

So if I can’t effectively store my video backups up there, and if they close off more of their API so I can’t use Arq for backups… my practical use for ACD drops way, way down below my “value-line”.  It may be time to migrate.

Managing Encrypted files on Amazon Cloud Drive

I have implemented a file system on Amazon Cloud Drive for a lot of media with the great acd_cli.  To protect my privacy, I have run this through an encryption layer, encfs.  My writeup will follow.

One problem I was trying to work out in my mind, though, is how to manage – rename and delete – files once they’re all scrambled up and I can’t even discover the paths and filenames.

Ideally this would be seamless: delete a local file stub and it would trace back to the encrypted remote file.  It doesn’t quite work that way, but I discovered how to do it on my Linux host.

Once I realized that the encfs filesystem shows the same inode numbers for the encrypted and decrypted files, I had a clue.  First, let’s find out what that inode number is:
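
On the decrypted (plaintext) side – the filename here is just an example:

    # The first column of ls -i is the inode number
    ls -i "videos/some-episode.mkv"
    #  149 videos/some-episode.mkv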

149 is the part we want.  inode numbers are unique per partition/filesystem, and they seem to be shared between the two sides of an encfs pair.  Now, to find a file in the encrypted path system with inode 149… find to the rescue!
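
With the encrypted root below standing in for the real Amazon Cloud Drive working area:

    # Search the encrypted side for the same inode
    find /path/to/encrypted -inum 149
    # ...and back comes one long, scrambled path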

I won’t even try to copy or obfuscate the output above.  Try it if you want to see it.  It would be almost impossible to track that file down without the inode number; size and date are much weaker clues for nailing down the exact file.

So, to stitch these two together, first you want the inode number only:
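
One way to get it (awk is my choice here; cut or stat would do the same job):

    # Strip everything but the inode number
    ls -i "videos/some-episode.mkv" | awk '{print $1}'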

Now this is something we can use in a delicious Linux command chain.
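
For example, with the same placeholder paths as above:

    # Look up the inode of the plaintext file and find its encrypted twin in one go
    find /path/to/encrypted -inum "$(ls -i "videos/some-episode.mkv" | awk '{print $1}')"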

This is easy enough to make into a little bash script that takes an argument, quotes it to protect against embedded spaces, and bakes in the explicit Amazon Cloud Drive working area:
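
A minimal sketch of that script – the working-area path is a placeholder, not my real one:

    #!/bin/bash
    # Usage: findenc "plaintext file name"
    ENCROOT="/home/me/.acd-encrypted"          # explicit ACD working area (placeholder)
    INODE="$(ls -i "$1" | awk '{print $1}')"   # quoting protects embedded spaces
    find "$ENCROOT" -inum "$INODE"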

Works great for specific files, not so much for directories.  You would have to change the ls command to use the -ldi options just for those cases.

Now that we have the filename, we can manually delete that filename on Amazon, either through the web interface or using acd_cli’s command line trash argument.

Clone a Clone

So I had yet another WD MyBook die on me a couple of days ago.  And I still went out and bought another one (what was Einstein’s definition of insanity, again?)… The one that died was only two years old, but these things are still quite cheap and very convenient to get.  Maybe someday when I have more money I will get a proper NAS enclosure.  For now, my pattern is to buy a new one every year.  They’ve almost doubled in size every time, so I can just clone everything to the new one and go from there.

Since I had such great experience with my WD MyBook Live, I decided to get the next version, a MyCloud.  It’s slightly improved: similar in appearance, also with a gigabit network port, but this one adds a USB 3.0 host port, so next time I can buy a plain drive and slave it off of this one.

My previous scheme didn’t seem to work well with this one, though.  I was unable to set up an unattended rsync to push from that drive to this one, because these drives use root as the ssh user and sshd isn’t configured with PermitRootLogin without-password.  It always seems to prompt for the password, which won’t work when AFK.

Until I figure that one out, I looked for another solution.  The coolest discovery was that these drives are actually running Debian.  After some research I found out that lftp will mirror a remote directory over FTP.  Of course, lftp is not installed on these drives; however, after running…
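
…essentially the stock Debian commands (from memory – the package sources on firmware this old may need some attention first):

    apt-get update
    apt-get install lftp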

I had it installed (on both the 2012- and 2015-vintage drives).

Next was the task of setting them up!  I found a good post on StackExchange (well, ServerFault) here that helped a lot.  I ended up using this:
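
Here it is as a sketch – the host, login, password and paths are placeholders, but the variable names match the ones I mention below:

    #!/bin/bash
    LOGIN="backupuser"
    PASS="cleartext-password"            # see the warning below
    HOST="wdmycloud.local"               # the destination drive
    LCD="/DataVolume/shares/Backup"      # local directory to push from
    RCD="/Backup"                        # remote directory to push into
    #DELETE="--delete"                   # uncomment to remove remote files gone locally
    #TESTMODE="--dry-run"                # uncomment to test without transferring
    lftp -c "set ftp:list-options -a;
    open ftp://$LOGIN:$PASS@$HOST;
    lcd $LCD;
    cd $RCD;
    mirror --reverse $DELETE $TESTMODE --verbose"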

With that I had some options I could use by uncommenting DELETE or TESTMODE, for example.  One additional gotcha is that it doesn’t retain the source’s ownership, but since this is such a basic setup, I just chown all the files under the directory in the LCD variable to my username.

The password for that user is in cleartext, in the file.  If you are not comfortable with that, do not use this.

It still doesn’t seem to be running in cron yet, but I need time to experiment some more.  I still much prefer the SSH method but I want to get it working reliably and repeatably.  I need to reimplement much of this each time the firmware gets updated, and copying a few files is much better than having to edit sshd config files each time.

WD MyBook Live

I discovered the other day that my WD MyBook Live is a lot more capable than I realized. It is actually running some flavour of Debian and has a fair suite of default unix commands.

So what did I do with it? I didn’t go too wild… Due to the death of a previous MyBook (capacitor problems on the interface board, I think), I decided I wanted some mirroring capability between it and another drive attached to my Linux server. Fortunately, on the Live, I found rsync, ssh and cron, which seemed like the power trio I needed.

First step, enable SSH. That was too easy, go to http://address/UI/ssh and check a box. Done! The instructions for logging in are there.

Next, log in by ssh and create an ssh key pair… Something like
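
…this sketch, where the empty passphrase and the /root/.ssh location are the parts that matter:

    # Generate an RSA key pair with no passphrase, stored under /root/.ssh
    ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa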

Use no passphrase on this one, and store the keys in /root/.ssh – it seemed reasonable enough (do I need to tell you that you need to guard this key carefully, as it leaves the door wide open to your server?). Next, copy the public key over to the other machine…
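
For instance (user, host and the temporary path are placeholders):

    # Copy the public half over to the server
    scp /root/.ssh/id_rsa.pub user@server:/tmp/mybook.pub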

And on the server
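
…append it to the authorized keys of whichever account the MyBook will log in as (a sketch, assuming the key landed in /tmp as above):

    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    cat /tmp/mybook.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys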

Test it out on the MyBook again…
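
Same placeholder user and host as before:

    # This should drop you straight into a shell, no password prompt
    ssh -i /root/.ssh/id_rsa user@server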

Boom. In.

Next, test out rsyncing. I found out that the directories created through the GUI and through file sharing are on /DataVolume/shares, so…
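
…something along these lines, with user, host and paths as examples (flip source and destination if you want to mirror the other way):

    # Dry run first: push a share from the MyBook to the drive on the server
    rsync -e "ssh -i /root/.ssh/id_rsa" -avz --dry-run \
        /DataVolume/shares/Public/ user@server:/mnt/backup/Public/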

It should pull in the key and do a dry run of the sync. If it works, try without the --dry-run switch and run the real sync. This will take some time depending on the amount to sync.

The switches are -e to run the transfer over ssh with our key, -a for archive mode (recursive, preserving permissions and symlinks), -v to be verbose, and -z to use compression. You can remove the -v portion before putting it in cron.

Speaking of which, put the above successful command line into a shell script and copy it into /etc/cron.daily. Don’t forget to make it executable.
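
Something like this, using the same placeholder paths as the example above:

    #!/bin/sh
    # /etc/cron.daily/mirror-to-server -- chmod +x this file
    rsync -e "ssh -i /root/.ssh/id_rsa" -az \
        /DataVolume/shares/Public/ user@server:/mnt/backup/Public/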

Very cool! The Live series of drives is now called the MyCloud, and is more powerful yet, including a stronger CPU and a USB host. It’s probably worth having at least one of these devices on a local network for part of a comprehensive backup strategy.

More hack attempts

After my last experience, I checked my logs and noticed quite a load of failed attempts on my mail server.  It looks like a brute force script kiddie attack, which I’m pretty sure will fail on my machine.

Still, I want to kick out these morons.  So after some research, I found fail2ban.  The installation was simple enough, and with a little bit of configuration (in jail.local, not jail.conf!) I had it up and running… but the attacks continued.
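
For reference, the kind of thing that goes into jail.local – the filter and section names depend on your fail2ban version and mail setup, so treat this as a sketch rather than my exact config:

    # /etc/fail2ban/jail.local (excerpt)
    [postfix-sasl]
    enabled  = true
    port     = smtp,submission
    filter   = postfix-sasl
    logpath  = /var/log/mail.log
    maxretry = 2
    bantime  = 86400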

I wrote a simple perl one-liner (roughly the one sketched below) to parse out all of the failed login attempts, run them through sort and uniq to get the repeat offenders (twice is enough, kids) and append that to hosts.deny.  That worked, but it wasn’t ideal.  I’d rather have iptables-level blocking (using DROP instead of REJECT to waste as much of their time as possible).  But fail2ban wasn’t catching them for some reason.
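
The sketch – the log path and message format will depend on your MTA and syslog setup:

    # Pull source IPs of failed SASL logins, keep repeat offenders, block them
    perl -ne 'print "$1\n" if /\[(\d+\.\d+\.\d+\.\d+)\].*authentication failed/' /var/log/mail.log \
        | sort | uniq -c | awk '$1 >= 2 {print "ALL: " $2}' >> /etc/hosts.deny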

I set up a secondary rule and it still failed – until I discovered fail2ban-regex! With that command you can test your rules at any time instead of waiting for the next attempt to come in.  Great!  It turns out the regex wasn’t quite right for the messages I was getting.  I simplified the regex until it caught the failures.  But it still wasn’t working live.  Grr.
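
fail2ban-regex takes a log file and either a filter file or a bare regex, so you can replay real log lines against your rule on demand, for example:

    # Test a filter against the real log without waiting for the next attack
    fail2ban-regex /var/log/mail.log /etc/fail2ban/filter.d/postfix-sasl.conf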

fail2ban works on log files.  It scans for repeated attempts to determine if there’s an attack going on.  This works great unless the logging daemon compresses the messages with something like “last message repeated x times” – and that happens a lot, especially when you’re under attack and actually need the individual lines!  You cannot turn this feature off in sysklogd.  The last key was to replace sysklogd with syslog-ng and POW, the banstick came out to play.
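
On a Debian-style box that swap is just a package install – apt offers to remove sysklogd at the same time, since only one system log daemon can be installed:

    apt-get install syslog-ng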

Debugging wasn’t very easy, because the failures are silent.  Until I found fail2ban-regex I had about 4-8 hours between tweaks to the regex to see if it worked.

At least now I have a self-setting ban trap that uses iptables-level blocking.

If you’re reading this and you’re learning to be a script kiddie, you are learning to be a loser.  You are creating nothing of value.  You could vanish from the Internet and it would only become a better place.  Is that really what you want?  Instead, why not keep on learning about security but do it the right way, on your own machine or a VM, and learn to strengthen the Internet, not ruin it.  You might actually be appreciated and valued by others on the net.

Hack Attack

Someone mentioned they got a bounce from my domain’s email. I went to take a look at the error and discovered a couple of hosts trying to brute-force a login to my SMTP server. Some quick config changes to create a blacklist, plus a fail2ban install, and it has stopped now.

Lesson 1: check your logs often
Lesson 2: use SASL
Lesson 3: use complex and random passwords
Lesson 4: install and configure fail2ban or blacklist the bozos with iptables or hosts.deny or something.

I got most of these right the first try, especially the middle two.

Eternal vigilance, they say…

Canada domain name registry scam

Domain Scam
Shred this letter immediately.

I have been frustrated with this Domain Scam for a long time now.

I have a few domain names, and I happen to be in Canada.  There is a company called “Domain Registry of Canada” that mails out official-looking envelopes (it looks like a government-issue brown windowed envelope) to everyone that has WHOIS information indicating they live in Canada.  This is an example of the letter they send.

Unless you read it quite carefully, and know what is going on, you might think you need to pay their (very expensive) domain registration fees in order to avoid losing your domain name.

This is NOT TRUE.  Consider that many people, like myself, purchase hosting from a company like 1 & 1.  Part of the package includes free domain registration for one domain.  There is very little technical know-how required to get this going.  In fact, it could be that some hotshot young web developer has set this up for you.

You need to do nothing except keep paying your web hosting amount to safely retain your domain name.  This letter conveniently omits this fact.  They do make this somewhat clear in ALL CAPS halfway through the letter, but only after the scare tactics a couple of paragraphs above.

Is it strictly a scam?  No, I guess not – they do provide a service, and they spell out everything in this letter – but it’s very dirty.

To make this abundantly clear: There is NEVER any reason to do anything except shred this letter. 

For more information, feel free to Google “Domain Registry of Canada” and look at any link that is not their official web page (i.e. start at the second link).  Here’s a link to make it even easier.  You will find many other bloggers, most more capable than myself, that explain this quite well.

Great VPS Experience

I have been using shared hosting from 1 & 1 for a few years now. No big complaints (except their domain management page is terrible) but I needed a bit more flexibility. I suppose shared hosting with its inherent limitations helps you develop a certain way, but I wanted a bit more.

For one, I wanted a much more flexible IMAP server setup. I was getting something like 100 mailboxes at 2GB each, but what if I wanted to totally move off of gmail and use 5GB? Setting up one or two archive accounts seemed… Messy.

So I did some research and found lowendbox.com. It is a bit intimidating at first because you don’t know what to look up, or what questions to ask, like “what is OpenVZ, and does it matter for what I want to do?” Or “what happens when I lock myself out?” (Notice, that’s when, not if). “What about if I want to wipe it out and start fresh?”. “Do I need one of these control panel products?”. I have those answers now, if you want. 😀

Well, I didn’t have any very clear answers to any of these but I decided to try it out, with a great special from LEB for $56/year for a 2GB machine from ServerMania.

This. Stuff. Is. Cool.

If you have never done any admin work on Linux, set up your own server or anything, stay away. You will get little value out of it and very likely get hacked. But if you are confident with your admin and security chops, this is a crazy bargain. I chose Debian 6 because Debian. I have a long history with that.

I spent a good day figuring out, testing and setting up the firewall, and learned an amazing amount. I tightened up SSH, installed some IDS stuff and I was happy, even though the thing was hardly useful yet!

I used git during the setup so I would have some ability to track and undo dumb changes. I should have started this earlier, I know it!
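
Nothing fancy – the bare-bones version of the idea is just a repository over the config you care about, /etc being the obvious candidate:

    # Snapshot /etc so config experiments can be reviewed and rolled back
    cd /etc
    git init
    git add -A
    git commit -m "baseline before tinkering"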

First big task was IMAP and SMTP. I got that rolling and made sure that spammers couldn’t hitch a free ride. I got a free SSL certificate from StartSSL and it does the job. Test like crazy.

Next I got Apache rolling and a database server, and made sure those aren’t vulnerable either.

Finally, once I was happy, I pointed my domain to freedns.ws and configured its records. Let me stop for a second and state that of all the stuff I have done so far, FreeDNS.ws amazes me the most. I can’t see why anyone would pay for DNS server services when this thing is around (ok I notice certain record types are missing. I didn’t really need them though).

While waiting for the DNS switchover to happen (always an interminable wait) I discovered mydnscheck.com. A great tool to see how things are progressing and what you have set up or have forgotten to set up!

So far that’s it. I haven’t cancelled my shared hosting yet but I’m moving everything off it. I think I’ve outgrown it… Or more likely it just doesn’t suit me, as I never really used it to its potential.

Now I might even run a personal Minecraft server on my VPS, not sure yet. But hey, I can.

It was a lot of work, but a lot of fun. And I am happy in knowing there’s a little piece of the Net that I put together all by myself.

Raspberry Pi – gone WiFi

I was using an older WRT54G as a client/bridge for my Raspberry Pi to my home network.  It was working OK, except it would skip and stutter when playing back MythTV streams. I didn’t think a USB dongle would prevent that, but it would be nice to replace the complexity of the router with a tiny little wifi stub.

I found the AirLink 101 Wireless 150 USB “thing”  (Model AWLL5099).  I can’t call it a “stick” because it’s not stick-like (or “sticky” as Chris Wall would have said).  It’s a teeny little stub, about 1/4″ longer than the USB plug…

AirLink101

But boy, does it work.  It’s single-band (2.4 GHz only), so it’s not as good as it could be, but for $16…!

With OpenElec, you can do all of the configuration from the OpenElec utility.  Boom, it works: just choose wlan0 as the network interface and enter your security info.

With Raspbian, it was a little more complex.  With the 08-19-12 installer, I needed to install a few extra packages:
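
From memory, roughly these – the exact list depends on what the image already ships and on the dongle’s chipset (a firmware package may be needed too):

    sudo apt-get update
    sudo apt-get install wpasupplicant wireless-tools
    # plus, possibly, the firmware package for your dongle's chipset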

Read through this link to learn some things about how to set it up.

I was getting an error from wpa_supplicant that I couldn’t decipher.  I finally found out what the problem was by running:
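
…it by hand with debug output – something like this (the driver backend and config path may differ on your setup):

    # Run wpa_supplicant in the foreground with debugging turned on
    sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -D wext -d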

I had another Raspbian setup, but I forget exactly how I set it up, or which version of the installer I used… That one had all of the necessary packages installed and all I needed to do was enter the wpa_supplicant lines…  Pretty slick.
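
By “the wpa_supplicant lines” I mean the stanza in /etc/network/interfaces – roughly this, with the SSID and passphrase being placeholders:

    allow-hotplug wlan0
    iface wlan0 inet dhcp
        wpa-ssid "MyNetwork"
        wpa-psk  "MyPassphrase"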