up_the_irons: is there still room on kvm14 to get my package upgraded at some point? I know, I should have just gone bigger when you said "take your time to decide"... I don't need it done any time soon, but yeah, I'm having regrets :P
whitefang: can't say right now. all these VMs are going onto kvm14, it will probably fill up the box
time to order more hardware.. yikes
well, it would be +256MB RAM and +20GB disk space. I'm not thinking major upgrade, as much as I'd like to throw down the money on 4GB of RAM all to me.
you could just shave a few MB here and there off of everyone moving over to the machine and allocate it to me... I'd be OK with that.
time to bounce for a bit, good luck with the damage control up_the_irons.
whitefang: thanks!
up_the_irons: Are the servers branded or built from parts?
DDevine: i build them, they are supermicro
Ah, no next day parts :S
well, i *could* have got another raid controller fedex overnight, but i figure what is the use, the data is already corrupted
Whole new box? How long has that one been in service?
DDevine: he said earlier, first one
G: Sorry, I have been skipping around many channels and doing lots of little things so I miss a lot of the finer points.
DDevine: mercury has been in service since 01-17-2009
don't you mean "had been"? sorry, too soon?
whitefang: oh man, kick me while i'm down why don't ya ;)
Also, just got an interesting email from Amazon. They now do 5TB files and the Multipart upload can do some interesting stuff. http://fpaste.org/qM53/
You can stream the file into S3 as it is created.
yeah i got that too
I expect tarsnap sales to go up.
oh, RIP mercury
After resolving to actually think more logically about my problems and to be a bit more attentive to detail -- I got my new mail setup working. The whole experience has been fairly enlightening. I hate mail systems, but I conquered this one.
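The "stream the file into S3 as it is created" point has a concrete shape with any S3 client that reads stdin (the modern aws CLI does, via `aws s3 cp -`, which multipart-uploads under the hood). A hedged sketch; the bucket and path are invented, and the pipeline is only printed, not executed, so it's safe to run anywhere:

```shell
# Hypothetical example: pipe a tarball straight into S3 without it
# ever touching local disk. The multipart upload API is what makes
# uploading a stream of unknown size possible.
backup_cmd='tar czf - /var/www | aws s3 cp - s3://example-backups/www-snapshot.tgz'

# printed only; to actually run it (with real credentials and bucket):
#   eval "$backup_cmd"
echo "+ $backup_cmd"
```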
im definitely setting up some sort of cronned rsync solution from this
i can't wait to set up my file server... wait, whats stopping me? i have the free time.... i dont know what distro i want to use
freebsd!
FreeBSD. 8.1!
zfs!
i already have freebsd computers out the yingyang ;p i want to use linux this time
ok maybe debian. meh
what im particularly interested in setting up is a lvm with 8 hard disks
I personally use FreeBSD with ZFS but I have a friend with Linux with BTRFS
zfs can do single partition across multiple disks?
yea and has some really neat features like filesystem compression
oh cool, might have to check it out then.
jbod
well they are mostly ide drives on different controllers
whitefang: RAID is for business continuity in the event of a failure. backup is for data recovery in the event of a catastrophic event. a *lot* of people make the mistake of thinking RAID = backup, when it's actually not. (it seems like you fall into this category, hence my mentioning this after the fact)
RAID is for some failures, obviously it is not foolproof ;-)
Silly question. I have the (old) 768 MB / 20 GB / 200 GB for $20. I see a 768 / 20 GB disk / 400 GB for $20 now. Can I get that extra 200 GB? :) or new customers only?
lucky: you can upgrade to the new bandwidth tier for a 1 time fee of $20, if interested, submit a support ticket with your UUID, and permission to charge the $20
lucky: it's $20 per VPS, just to be sure I'm clear.
Does anybody know what to do when freebsd-update tells me that it can't identify the running kernel when I try upgrading from 7.2 to 8.1?
jpalmer: what about upgrading RAM?
nerdd: do you have a custom kernel?
jpalmer, alright. thanks :)
fink: not sure the cost of RAM upgrades. I should probably find out.
Nope, it's generic, and it's a clean setup (my vps was on mercury before the crash)
jpalmer: i think i want that in the next few months
mike-burns: LOL, I should make a deal with tarsnap
we all want more ramz!
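To make the ZFS answer concrete: one pool spanning the drives (the "single partition across multiple disks" question), with compression flipped on per dataset. A sketch only; the device names are hypothetical, and `zpool`/`zfs` are stubbed to print their commands so nothing destructive runs:

```shell
# Stub the admin tools so this sketch just prints what it would do.
zpool() { echo "+ zpool $*"; }
zfs()   { echo "+ zfs $*"; }

# raidz gives single-parity redundancy across the member disks;
# plain `zpool create tank ada0 ada1 ...` would be the JBOD/stripe case.
zpool create tank raidz ada0 ada1 ada2 ada3
zfs set compression=on tank      # transparent filesystem compression
zfs create tank/fileserver       # child dataset inherits compression
```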
I was just looking back at tarsnap, and it looks pretty good
would you guys recommend them?
Yup.
mike-burns: how easy is it to do a restore?
It has the same syntax as tar, approximately, so it's about that easy. So, kinda easy.
I've never tried with tar, but can you extract one particular file with tarsnap archives?
Yes. I think. To be honest I've only played with it and hope to never need to do a restore.
nice, looks like it's possible; the general usage page shows them restoring two users' home directories
someone should make something more user friendly than venti from plan9port. there's a redundant block storage shrink-o-matic backup mechanism. its Achilles heel: no data ever gets deleted that is archived. *sigh*. epitome was to do that, but it's klunky, epitome2 was never released to the public nor finished, bleh.
backing up whole lvm partitions seems rather non-guaranteed in the consistency department.
if there is serious desire for backups at arpnetworks it might be best to follow the (very good and useful) model at other hosting companies: provide a server (with no bandwidth penalty) to dump backups on for the vps customers ..
that's an interesting concept
maybe there should be some form of quota (lvm per vps for backups) .. or maybe even an iSCSI initiator ..
s/initiator/target/
toddf: i had issues with that and access control. in order to save owner/group/perms, you need root access on the backup server. so if everyone has root, what is to keep someone from grabbing other customers' data?
up_the_irons: chroot ftp/sftp access? and why root on the backup server? that sounds like a poorly designed backup app in the security department
toddf: if no root on backup server, everything is saved as the same user
UIDs can't change
so a restore is then useless
might be ok for /home/foo, but not for like the root fs
are you talking rsync or what?
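Since tarsnap's command line deliberately mirrors tar's, the single-file-restore question has a short answer. A sketch with invented archive and file names, and `tarsnap` stubbed to print (a real run needs the binary plus a funded key):

```shell
# Stub so the sketch prints instead of needing a real tarsnap key.
tarsnap() { echo "+ tarsnap $*"; }

tarsnap --list-archives                       # what archives exist
tarsnap -t -f mybackup-2010-11-14             # list files in one archive (tar -t syntax)
tarsnap -x -f mybackup-2010-11-14 home/alice/.procmailrc   # extract just one file
```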
I'm talking 'vps people create archive of their own desired flavor, I'd do gzip'ed dumps, randalshwartz would do zfs send/dumps, etc ..'
toddf: rsync / tar
toddf: so each customer would get a 2nd vps for backup purposes?
tar = no problem, rsync .. I'm not the type that prefers that, then again my parents' systems are rsnapshot'ed to my brother's systems *shrug*
up_the_irons: disk storage is all I'm theorizing about
how to access it is the question. do you set up 'backup.cust.arpnetworks.com' or do you set up 'backup-iscsitarget.cust.arpnetworks.com' or 'backup-ftp.cust.arpnetworks.com' or something similar?
toddf: ah, so rsnapshot keeps metadata about the owner/group/perms and then restores them?
aka make the concept of backups a little more abstract than 'use rsync'
and obviously tar keeps that data
I'm pretty sure rsnapshot is not going to do what you want
I only mentioned it because I'm not a 'rsync for backups' fan, I'm a 'rsync for moving my pics of my kid around' fan .. yet my parents get their systems backed up by my brother via rsnapshot, which is essentially rsync + subdirs + hardlinks
everybody has their idea of backups of course
roger
mine centers around 'system needing backed up shovels data via some protocol in some format to some method of archive server for later retrieval' .. typically tar and dump + gzip/lzma/bz2 based ..
I check everything into git and back up my git repos.
now, being someone who had to re-install his vps this morning, I have to wonder if it'll affect your business model if you start offering an archive server for vps customers' backups .. if I had to choose between $10/mo and no backups and $12/mo and backups, I'll do my own backups .. since my vps is primarily 'regurgitate info from other servers' (dns slave, mirrored http files, etc) .. the backup in my case is in my head.
I'd pick a backup plan.
obviously other users have other usage cases.
mike-burns: you'd consider `backup' a thing to add to your monthly bill?
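The "rsync + subdirs + hardlinks" trick behind rsnapshot is easy to demonstrate with nothing but coreutils: a hardlink copy (`cp -al`) makes each new snapshot directory nearly free for unchanged files, and ownership/permissions survive because it's a filesystem-level copy. The paths below are throwaway examples; this toy actually runs:

```shell
# Toy demonstration of hardlink-farm snapshots (rsnapshot's core idea).
demo=$(mktemp -d)
mkdir "$demo/live"
echo "important data" > "$demo/live/file.txt"

cp -a  "$demo/live"   "$demo/snap.0"   # first snapshot: real copy (rsync -a in practice)
cp -al "$demo/snap.0" "$demo/snap.1"   # next snapshot: hardlinks, ~zero extra space

ls -i "$demo/snap.0/file.txt" "$demo/snap.1/file.txt"   # same inode twice
```

rsnapshot automates the rotation and uses rsync so only changed files get real copies; the unchanged ones keep sharing inodes across snapshots.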
(like insurance, I guess?)
i check things into git and don't back up my repos ;) given that git is distributed, there's always some computer that has it. i have 2 laptops and a gitosis server, so 3 copies right there already
I'd pay extra for someone else to handle backups in a completely transparent way.
Ah, I delete clones of repos constantly.
ah
mike-burns: the "completely transparent" way is where I have trouble. I haven't found a cost effective solution. there are expensive iscsi master/slave solutions, but they are.. expensive ;)
mike-burns: with an operator account to your vps with access to read your disks (aka do a dump), or back up the lvm on the host system directly?
Yeah, the "completely transparent" thing is the real issue here, and probably why it won't happen.
I mean that I don't want to notice that my system is being backed up (until I need it). I could tolerate an operator account, but anything more than that risks cutting into disk space.
iscsi can be inexpensive if you're not searching for `hardware' and `warranties' etc .. http://pastebin.ca/2015847
toddf: backing up the lvm on the host directly is pretty taxing on the host, and if the majority of customers select backups, then performance will degrade fast
up_the_irons: definitely...
toddf: wow, what are all those volumes? :)
about the only way to do lvm backups is if you had some snazzy iscsi target that did snapshots of the blocks and then permitted a 2nd host to back up the snapshots, and even then filesystem consistency would be an issue. I'd expect things to be more efficient and more useful to people if the native os tools in their vps were used to create backups from consistent data states, and use that to restore with in a recovery effort.
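The host-side LVM approach being debated would look roughly like this, which also shows why consistency is the worry: the snapshot is crash-consistent at best, since the guest filesystem is never quiesced. VG/LV names are hypothetical and the commands are stubbed to print, not run:

```shell
# Stubs so this sketch is safe to execute anywhere.
lvcreate() { echo "+ lvcreate $*"; }
dd()       { echo "+ dd $*"; }
lvremove() { echo "+ lvremove $*"; }

lvcreate -L 2G -s -n vps42_snap /dev/vg0/vps42      # COW snapshot of the guest's LV
dd if=/dev/vg0/vps42_snap of=/backup/vps42.img bs=1M # image the frozen blocks
lvremove -f /dev/vg0/vps42_snap                      # drop it before COW space fills
```

The `dd` is also exactly the I/O load that makes this "taxing on the host" when many customers opt in.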
the server name gives a big hint
under my company I have 2 afs cells, one internal only (fries.net) and one publicly accessible (freedaemon.com)
since most of the data is infrequently accessed, and the local disk io hits 6mbit/s on the iscsi volumes, that's plenty fast enough when I'm normally not accessing the fileserver locally (1.5mbit/s max outbound at the office)
i c
(and yes I'm using claudio@'s iscsid under openbsd for the initiator and netbsd's iscsi-target for the server, also running on OpenBSD)
he had a good laugh at me when I said 'yes, it works with native v6 over ipsec over wifi from home to the office'
lol
it's not meant for production, but it works well enough to put my data on it knowing that if any number of things go bump I'm rebooting afs4 and fsck'ing for 360s at least .. openbsd source trees and other re-downloadable stuff seems ok to put there
what about a SAN mounted to each VPS (for an additional cost)? all backups are done by the user in their preferred style (rsync, dump/restore, tar, etc) and just saved to the SAN mount.
TBH, the issue with all of these is that they're on-site.
mike-burns: IMO, if a client wants offsite, they need to appropriate those resources themselves.
Sure, I can understand that.
up_the_irons: another vps I used a couple years ago, every vps had the option of installing a bacula agent, and the VPS provider ran a centralized backup bacula server. it sounded like a lot of overhead on the provider's part.. so it may not be suitable here. providing an iscsi target to be mounted in a VPS, and letting the client handle his own backups, sounds a lot more.. maintainable.
maybe I should do more than just .. add it to my todo list ..
netbsd's iscsi-target doesn't have config reload capabilities, you restart it to reload the config, and disconnect all clients in the process .. brutal! ;-)
yay, I just thought, the only good thing to come out of mercury dying is that i can finally get rid of the LSI MegaRAID controller; i HATE its CLI.
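On the customer side, the SAN-mount idea collapses to a mounted volume plus whatever tool the user prefers. A hypothetical crontab fragment for illustration only; `/backup` as the mountpoint and the 3am schedule are invented, not an actual ARP Networks path:

```shell
# hypothetical customer crontab: nightly tar of the root filesystem
# onto an assumed SAN mountpoint at /backup, staying on one filesystem
# so the backup volume doesn't recursively back itself up
0 3 * * * tar czf /backup/rootfs-$(date +\%F).tgz --one-file-system /
```

Because the archive is created inside the VPS by native tools, the data is in a consistent (or at least application-known) state, which is the advantage argued for above over host-side LVM imaging.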
Gonna pop in a 3ware 9650-4LPML and be back in action
eh, 3ware's cli sucks too. and their current firmware for that controller requires a newer version of the driver than is in a lot of current distro versions. ... which sucks, since you then have to remember to update your initrd with their driver copy or leave your machine unbootable :P
jdoe: i haven't found this to be the case, I always put on the latest firmware and it works no problem
jdoe: and the CLI is way better than LSI's. It's *simple*
up_the_irons: I'm talking reasonably old at this point, ie centos
i c
are we talking about tw_cli?
yeah
I dunno about "simple" exactly, I always get lost trying to figure out which sub-section I want to be in
./tw_cli
/c2 show
/c2/u0 show all
/c0/p0
/c0/u...
etc
yeah
/c2/u0 show alarms
easy
the, uh, sadder thing is that I haven't figured out how to make the auto-scrub/disk check work properly. I give it a day to run on, but what happens is it just runs *continuously* all day, looping, rather than one-shot. don't suppose you know what I'm doing wrong? :P
not sure, i don't run the auto-verify
doh.
hm.. I think I'm going to use duplicity + amazon S3 for backups
seems like a pretty solid choice
isn't 3ware an LSI company?
they are now, yeah. he's right though, 3ware-branded gear sucks less to deal with than LSI-branded gear :P
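The duplicity + S3 plan mentioned above looks roughly like this: duplicity does encrypted, incremental archives and speaks S3 directly via an s3 URL scheme. Bucket name, paths, and credentials below are all placeholders, and `duplicity` is stubbed to print since neither the tool nor real AWS keys will be present:

```shell
# Stub so this sketch prints commands instead of needing duplicity + keys.
duplicity() { echo "+ duplicity $*"; }

export AWS_ACCESS_KEY_ID=XXXX AWS_SECRET_ACCESS_KEY=YYYY   # placeholders

duplicity /home/me s3+http://example-bucket/vps-backup      # incremental encrypted backup
duplicity restore --file-to-restore .muttrc \
    s3+http://example-bucket/vps-backup /tmp/.muttrc        # pull back a single file
```

Like tarsnap, this ticks the earlier boxes in the discussion: off-site, customer-driven, and capable of single-file restores.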