qbit: I refined my alignment technique again. :D
haha nice
Basically, you don't have to tap the burr at all.
Starting with all the bolts loose, if you rotate the axle, it shouldn't bob, because the outer burr is free to slide around to accommodate the inner burr.
Now if you tighten one bolt all the way, it will probably start bobbing.
Rotate the axle and stop when it's reached its highest point.
Now go back to the bolt you just tightened, and loosen it all the way while simultaneously pressing down on the top of the axle. The axle should drop down as you loosen the bolt.
makes sense
Now tighten the bolt again. The axle won't move back up as you tighten the bolt.
Now see if it still bobs. If it does, repeat. If it doesn't, then move on to the next bolt and perform the procedure.
cool
With practice, you should be able to tighten-loosen-tighten each bolt once.
i will give it a shot sometime today
I think it works better if you take out the springs. I put the small bolts back in, though. Without the springs.
to help with keeping the outer burr in place?
No, tightening the three long bolts will keep it in place. It's to keep the little clear acrylic cylinder in place while you're aligning.
i just mean when it's mostly in pieces
right
The first time I did the above procedure, after I finished, I saw that the cylinder had moved so far to the side that there was a gap at the bean loading opening.
whoa heh
I keep the little bolts loose, of course.
I should make a video of this and send it to Doug. I wonder if anyone else has developed this same technique by now, though.
i am a fan of over-information :D
gonna do it? Make a video?
Maybe, I'd have to clean up. I think I'd actually need a cameraman though. The shot would have to zoom in on the top of the axle to see the bob.
kew
Maybe I'll just describe it to Doug, and he can make a video of it with Barb.
she is pretty pro at camera-ing
brycec: can you rsync with Amazon S3 RRS?
last i used S3, it was a pain to use with filesystems, especially large files. i always had reliability issues (backups stalling in the middle and nothing telling me why).
If S3 RRS works for you, I would say keep using it, and there is no need to buy or use any local rsync-based backup storage. It's just that many people have requested local backup storage, so i'm trying to find the best way to provide it.
CaZe: free dns services drop your IP for inactivity?
mercutio: if you only had 2 disks, why would you go with raidz or mirror? (i'm new to this; I've worked recently with ZFS mirrors, but not raidz yet)
dang, lots of scrollback, i'll read the rest later...
up_the_irons: The ones I've tried do. dyndns, afraid.
what about HE?
HE?
hurricane electric
Do they provide domains or just DNS hosting?
I think just secondary for their ipv6 project
I have them for stonehenge.com secondary
HE provides connectivity but they are also getting people aware of ipv6
weird. ipv6.he.net is blocked by opendns for having viruses. :)
RandalSchwartz: is it tunnelbroker.com?
yeah that
CaZe: HE supports dynamic IPs on domains they're hosting, dns.he.net. But you do have to bring your own domain for that to work. However, HE's free DNS is definitely my favourite. (And contrary to popular belief - there's nothing special about it, no strings attached, it's not ipv6-only, yadda yadda yadda)
up_the_irons: Quite right, no rsync. But rsync isn't really the best for doing backups. Simplest perhaps, but it leaves things to be desired compared to more "big boy" backup solutions that provide manifests, incremental/staged backups (sure, there's rsnapshot), and that sort of thing.
up_the_irons: FWIW I use duplicity+s3 and it works fantastically. Everything is stored encrypted, so I'm not worrying about some rogue Amazon employee stealing my data through backups. It's got a slew of features to handle incremental, full-if-older-than, cleanup, verification, and yes, manifests.
And it just throws the data at S3 and it sticks :p
I won't say S3 is perfect - if you've ever tried to read back data you just pushed, you understand the frustration. But like you said, up_the_irons, it's working for me and that's what I'll probably continue to use.
brycec: that sounds pretty cool
But as a customer, I thank you for keeping your eyes out for customer demand and working to provide us with that. Hopefully you'll get some sales bullet-points from this. (It's easy! It's simple! It's fast!)
hahaha yeah
did you look at tarsnap as well as duplicity?
up_the_irons: You'd raidz two disks to get capacity. It still does its parity thing, you can adjust whether some data gets duplicated more than others and all the other ZFS-isms, but you are still at risk of losing *some* data if a drive fails. (but hey, zfs will tell you what files you've lost ;))
raidz2 with 4 disks would be better than raidz + mirror though
RandalSchwartz: I have... And I want to like and use it, but I can't justify their pricing in comparison.
what pricing? I thought it was pretty much S3 passed through, with compression and de-dup done client-side
RandalSchwartz: S3 b/w is free to store data, S3 storage is $.09/GB/mo for RRS (and I think $.12/GB/mo for standard). Tarsnap is $.30/GB/mo, plus b/w.
Oh, and duplicity does compression and some dedupe too (not positive to what degree off the top of my head)
the original "2 data centers lost" version of S3? or the cheap version? that might be part of the difference
RandalSchwartz: cheap version is RRS - reduced redundancy
yeah - so you're getting less service
I'm paying for a lesser commitment, yes. thus I'd expect a pricing difference.
ok - just working this out
RandalSchwartz: but for the "standard" S3, Amazon's price is $.125/GB/mo and tarsnap is $.30/GB/mo
I don't begrudge him his overhead or anything. Like I said, it seems like a great product, but it's not what I use because I don't want to spend that much.
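The duplicity+S3 workflow brycec describes can be sketched roughly like this. Everything here except the duplicity subcommands is an invented placeholder (bucket name, paths, the monthly-full and two-chain retention numbers), and it assumes duplicity with the boto S3 backend plus GnuPG installed:

```shell
# Hypothetical duplicity+S3 run; credentials and target URL are made up.
export AWS_ACCESS_KEY_ID=AKIA...            # real key goes here
export AWS_SECRET_ACCESS_KEY=...
export PASSPHRASE=...                       # used to encrypt the archives client-side

TARGET="s3+http://my-backup-bucket/host1/etc"

# incremental backup, starting a fresh full chain once a month
duplicity --full-if-older-than 1M /etc "$TARGET"

# keep only the two most recent full chains, then verify what's stored
duplicity remove-all-but-n-full 2 --force "$TARGET"
duplicity verify "$TARGET" /etc
```

If memory serves, newer duplicity versions also grew an `--s3-use-rrs` flag to request reduced-redundancy storage, which would match the pricing discussed above.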
http://www.daemonology.net/blog/2008-12-14-how-tarsnap-uses-aws.html
(I know THAT he uses S3, yes)
actually - that's probably a recent price decrease, and he hasn't adjusted to follow suit
he says $0.15/GB there plus bandwidth
http://aws.amazon.com/s3/#pricing
aaand I'm out ...
http://www.zdnet.com/blog/diy-it/amazon-reduces-s3-prices-because-0-11-is-too-much-to-charge/400
yeah, reduced
if you want an archival / backup product that doesn't charge for bandwidth, see cyphertite.com ;-) (disclaimer: I'm only a happy customer, not even getting perks for referrals.. yet..)
ahh - they use a datacenter, and not S3
wondering how they got to $0.10/GB/mo
and somehow, they are magically on CST even though the rest of the middle of the country is on CDT. :) <<== timezone nerd
so for userscripts, do i have to do something to enable them? :userscripts lists them but they don't seem to be working
i do have script whitelisting, but "enable-scripts" is set to true
lol - wrong channel ^^^
luakit for context
RandalSchwartz: I'm confused how the guy in that article got to 0.11, unless he's using reduced redundancy... which seems odd for backups.
lol qbit fail :D
Also cyphertite is pretty nifty. I've played with it in the past, no problems with it... but it wasn't my style, a bit too complicated.
jdoe: I wouldn't say there's anything wrong with using RRS for backups. It's still highly reliable, just not as highly available. Amazon's still got two copies of the data.
^ that's the prob i am having lately
having multiple hosts is kinda a bitch
Not to mention the chances of you losing your primary data at the same time as Amazon losing your backup are pretty damn small.
especially if you don't want host a to be able to see host b's backups
qbit: heh I banged my head against the AWS/IAM wall for a few weeks trying to get that to work. For now, I just hope my clients don't look at the script and see my access keys. Which... I'm pretty safe on, but still worry about.
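For the per-host isolation qbit is after, the usual answer is one IAM user per host, each scoped to its own bucket prefix. A sketch of such a policy (bucket and prefix names are invented; this is the standard S3 policy shape, not anything taken from the discussion):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-backup-bucket/hostA/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-backup-bucket",
      "Condition": { "StringLike": { "s3:prefix": "hostA/*" } }
    }
  ]
}
```

With a policy like this attached to hostA's IAM user, keys leaked from hostA's backup script can't read or list hostB's prefix.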
rot13 them
that's lame :P
haha
qbit: I think you mean ynzr :P
I really should make rot13.com my homepage
yeah you should
Side-note: If this counts for anything, duplicity is what Ubuntu uses for their built-in backup solution. (Well, deja-dup, which uses duplicity)
brycec: 99.99% isn't highly reliable when it comes to backups.
availability or durability?
reliability = durability, availability = availability
andol: RRS = 99.99% durable = 1/10000 loss in a year. (on average)
RRS is fine if you want a ghetto CDN or something, probably not ideal for backups.
jdoe: Aware of the implications, was just wondering what the 99.99% was referring to, without seeing that from a quick glance at the backlog.
IMNSHOIANABBQ
ah
CaZe: i forgot to ask - did you use washers in your new method?
up_the_irons: oh i meant go for an extra disk, but if that isn't an option then mirroring probably isn't an option either
qbit: Nah.
kew
I just kind of held the nut over on one side.
up_the_irons: late to the party but .. I note you inquired about rsync backup service. If you wanted us all to play, even if we don't have the budget to pay, perhaps offer limited accounts with limited disk space, and charge for extra space. I know I could probably back up important config files and the like in < 128mb or even < 1gb, and you might even get some people who decide to pay once they reach the level of data backup that requires them to .. just 128mb disk space heh
0$ cd /
0$ sudo tar cf - etc | gzip -9 | wc -c
1487236
if I want just the config preserved, I can do it trivially. data of course is going to take more than 1gb, so I will decide if I want to pay for data or just the configs ..
though one could easily store that small archive anywhere, not just a free rsync account
hmm
# tar cf - etc | gzip -9 | wc -c
1615487
# tar cf - etc | xz -9c | wc -c
1005208
you could just email something that size
and fwiw xz -9c is really slow and takes a bunch of mem too. xz -7c is about the max I've found useful, unless you want to archive stuff in the bg on an archive server that has nothing better to do than recompress at higher compression ratios
well for write once read many times it may be helpful
gah this linux is pissing me off on my desktop
it seems like gnome-terminal crashed, like surely that's something that should never crash
either that or tar crashed in the middle of outputting a filename
oh sweet! resizing fixed it
i bet /etc is bigger on linux
# tar -cf - etc | gzip -9c | wc -c
2862681
twice your size :)
thats linux's problem not mine *grin*
it's a desktop too so may have more things in /etc
oh, what, du says X11 is 65mb big
oh there's a core dump in /etc/X11
now it's normal size
# tar -cf - etc | gzip -9c | wc -c
1588074
that's actually smaller than my openbsd vps
and xz was way faster
i don't think xz is necessarily too slow for real use. it seems 1/4 of the speed of gzip
xz -9 can be.
yeh with bigger working set maybe. i'll try -6
-6 is the same
lets say you have streaming backups to a backup system. using xz -9 will be the bottleneck. use xz -1, and then later when no backups are running maybe consider using xz -9. but *shrug*
err -6 is slightly faster on arp, the same on my desktop
I've specifically chosen gzip as a 'not as compact but not holding up the backups' algorithm before
but if you're going over the internet to an adsl modem, xz -9c should go faster than adsl speeds, shouldn't it?
for less data xfer and/or less churn, xz -7 is usually where I go
it is implementation specific, lots of factors, including cpu speed and adsl plan
oh actually i'm doing 500k/sec heh
-3c with lzma is faster than modem speeds while being better compression than gzip
this is only on single core it seems too
"implementation specific" indeed ;-)
well adsl is generally limited to 24 megabit maximum, so if you get 3 megabytes/sec it should hold true all the time. but i don't really hear of people getting any more than 2.5 megabytes/sec
with uploading on the other hand, on adsl, -9 would make sense :)
people have been complaining on an online forum recently about how hard it is to upload data overseas. one guy was considering mailing a usb stick (he wanted to upload 250gb)
adsl needs to die. its not keeping up. unfortunately for some it is the only option. thankfully I'm in Oklahoma City where COX has overbuilt their fiber and I got a decent speed inet connection.
i have adsl at home. it's the upload speed that's annoying. i kind of wish there was an adsl 3 or something that always included annex-m as an option and did better dynamic bit loading, cos adsl is going to be around another 10 years probably
toddf: xz -1c is faster than gzip -9 on both vps and desktop, and gives a higher compression ratio. so maybe xz -1c is useful now for fast compression over gzip
you cannot make a general rule. you can simply build a speed chart of various algorithms vs typical compression ratios and decide where the sweet spot is for your given application
also consider xz takes more mem than gzip by quite a bit. if you have many xz running in parallel, or a low mem system, that may also factor into the equation
ahh yeh, cache misses etc
once when I was naive I set 'xz -9' and ran into swap on a system that was doing multiple backups in parallel. not pretty
running into swap is never pretty :)
$ du -hs vps-backups/
860M    vps-backups/
that's all of my databases (mysqldump'd) and /var/www tar'd up (not compressed).
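The gzip-vs-xz sweet-spot hunting above is easy to script as a throwaway survey. A minimal sketch, using /etc as the sample input like the log does (numbers will differ per machine and per gzip/xz version, and prefixing the pipeline with `time` gives the speed half of the chart):

```shell
# Compare output size for a few compressor settings on the same input.
# $c is deliberately left unquoted so 'xz -1c' splits into command + flag.
for c in 'gzip -9c' 'xz -1c' 'xz -6c'; do
  printf '%-8s %10s bytes\n' "$c" "$(tar cf - /etc 2>/dev/null | $c | wc -c)"
done
```

As the discussion notes, size alone isn't the whole story: memory use and parallelism matter too, so a chart like this only tells you the compression-ratio axis.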
i'm surprised it's that small.
mercutio: going for an extra disk isn't an option cuz the blade server only has 2 disks. i'm fine with mirroring at this point.
iscsi! heh
toddf: yeah, thanks for the tip
oh right, so you'd need to migrate to a diff server if you wanted to create more space
FreeBSD question -- do any of you guys have SSH set up as chroot on FreeBSD?
CaZe: it has been on my mind to offer forward DNS services ( CaZe │ Those free dns services are a little annoying...)
CaZe: just other things keep taking priority.. the way i set up my reverse dns manager, with the powerdns models i wrote, it shouldn't be *too* hard to implement
Webhostbudd: not sure about freebsd, but toddf and mercutio finally got virtio disk / NIC working on OpenBSD (on the new server)
up_the_irons: oh i was finding an issue with network latency when doing heavy disk i/o btw, sometime after you said to benchmark more :) it's quite reproducible
CaZe: qbit: what is it that you guys keep talking about, with the bolts, nuts, alignment, etc... ?? :)
basically sh -c 'sync && tar vxpzf /root/base52.tgz && sync' while leaving a ping running to it
up_the_irons: nope.. shell users get put in a jail, or i use vsftpd+ssl if they need ftp access.
up_the_irons: coffee grinder: http://www.orphanespresso.com/OE-PHAROS-Hand-Coffee-Grinder_ep_636-1.html
brycec: about raidz on two disks -- losing data, even if only some, isn't an option; just don't want that headache
i have no idea if it's the disk or network driver causing it though, but vmstat shows hardly any interrupts when there's lag
i just want to pop in a new disk and rebuild
up_the_irons: you can't raidz with two disks. i thought you may be looking to expand later when you said starting with 2 disks, but i see expand means san or hardware migration now :)
mjp: roger
qbit: roger
qbit: you're really into tuning that thing... :)
CaZe is more into it than I :D - he has theories even!
mercutio: yeah i would just hardware migrate it.
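For reference, the two-disk mirror plan discussed above is a one-liner in ZFS. The pool name and device names below are invented examples, and `zpool create` destroys whatever is on those disks:

```shell
# Two-disk mirror: survives one disk failure, usable capacity of one disk.
zpool create tank mirror /dev/ada0 /dev/ada1
zpool status tank

# the later "pop in a new disk and rebuild" step is a resilver:
zpool replace tank /dev/ada0 /dev/ada2
```

This also illustrates mercutio's point: `raidz` needs at least three devices, so with exactly two disks a mirror is the only redundant layout.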
pop the two disks into a pretty much identical server, but with more disk slots
qbit: LOL
up_the_irons: Yeah I know it's unwanted, particularly in this case. I was just answering the question of why anyone would bother to raidz two disks.
earthquake
toddf: are you using iscsi for production stuff? I looked into it a few weeks ago, but didn't come away with anything great. granted, i only looked at Linux solutions. What do the OpenBSD solutions look like?
brycec: ah ok
In Boston!
gah, seagate 3 tb disk connecting at sata 1 speed
first world problem
yeah ;) i don't know if it's something worse. err, a symptom of something worse. otherwise it doesn't really matter cos it's just for archiving stuff. i'm a bit wary cos it's a forward replacement from an rma
oh, it's showing up as 2tb only too
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
up_the_irons: that's that 4k sector thing i was mentioning to you before
mercutio: i c
oh, you have to use gpt! i didn't realise standard partition tables didn't work with 3tb. i thought it was just a bios issue
i was using gpt previously (and zfs). actually it may have had no partition table for all i know
this is specific to Linux, but I don't see why it wouldn't work for FreeBSD: http://www.techrepublic.com/blog/opensource/chroot-users-with-openssh-an-easier-way-to-confine-users-to-their-home-directories/229
any FreeBSD users care to weigh in on that?
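The 2TB ceiling hit above (before switching to GPT) is just MBR arithmetic: the old partition table stores sector counts as 32-bit values, and at 512-byte logical sectors that tops out below 3TB:

```shell
# 2^32 addressable sectors x 512 bytes/sector = 2 TiB, the classic MBR limit
echo $(( (1 << 32) * 512 ))    # 2199023255552 bytes, i.e. 2 TiB
```

GPT stores 64-bit LBAs, which is why the fix was a new partition table rather than anything BIOS-related.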
i think openssh has some built-in stuff
i think some of the chrooting stuff was used for cvs ages ago
mercutio: yep, > 2TB is one of the advantages of GPT
this hard-disk is way slower than ssd :) but at least it's not really noisy when just doing an rsync
*bsd systems have jails which probably work better than chroot
http://olivier.sessink.nl/jailkit/howtos_cvs_only.html
something like that may help you
you also should disable forwarding X11, forwarding ports etc
http://www.thegeekstuff.com/2012/03/chroot-sftp-setup/
stuff like that may help too
but i imagine people want to run rsync and tar and stuff
steadfast give people shells on their backup server
fwiw i think it was linux with remote storage
up_the_irons: are you considering block size options (5gb, 10gb) etc for the local rsync backup, or a pay-per-usage pricing model?
mnathani: i was thinking 20 cents per GB, minimum 5GB ($1)
milki: if i'm just giving shell accounts for rsync et al. backups, am i under the misimpression that jails are too much? i would think chroot would be enough...
mercutio: i always disable all the forwarding stuff
ya, probably too much -.-
though for project classes, we did do a debian vserver per student -.-
i'm now a *paying* user of PagerDuty
milki: nice
i remember getting a Solaris shell account when I was at UCLA. or maybe it was still SunOS back then, don't remember...
i dont think we have solaris machines anymore. probably when the licensing terms changed...
grad students got Solaris *desktops*. i thought that was particularly pimp lol
i remember labs full of solaris desktops and dumb terminals. hated those dumb terminals
someone tell me there is an IPv6-capable monitoring service (like Pingdom). i will sign up right now...
milki: hah, default de was cde >.<
yup :)
maybe you should start one up_the_irons :)
I will if heavysixer wants to help me
PagerDuty's infrastructure is fully replicated in two different Amazon AWS data centers << nice
sweet
channel poll -- useful to anyone?
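The chroot-sftp setup from the links above boils down to a short sshd_config block (OpenSSH 4.9 or newer; the "backup" group name is an invented example). It also shows the trade-off raised in the discussion: `ForceCommand internal-sftp` gives you the easy chroot, but it rules out rsync and tar over ssh:

```
# sshd_config sketch: members of the "backup" group get a chrooted, sftp-only login
Match Group backup
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

One OpenSSH gotcha worth knowing: every component of the ChrootDirectory path must be root-owned and not group/world-writable, or sshd refuses the login.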
http://stats.pingdom.com/31n57wtu0gs4 (i only have the 5 server plan, with 4 hosts configured)
someone had asked for a public status page, so...
w00t!!
http://blog.uptimerobot.com/new-features-bulk-actions-ipv6-support/
ipv6!
up_the_irons: can't find a pricing page. you?
jbergstroem: it's free actually
ah, so 50 is the limit. period.
jbergstroem: until Feb 2013 they say
ok
up_the_irons: thanks for the pingdom link. good to have for comparison
jbergstroem: np
jbergstroem: do u mean the uptimerobot link?
interesting
does that pingdom stuff work with multiple servers?
phlux: what do u mean?
jbergstroem: or do you mean my public status page on pingdom?
would i need separate accounts to check the uptime of all of my vpses?
phlux: if you sign up for pingdom?
phlux: i'm on the 5 hosts plan, so i can check up to 5 hosts
ah ok
up_the_irons: ping
up_the_irons: the public status page
jbergstroem: np, it's not really official... i mean there are only 4 hosts
jbergstroem: but it will do the job for those 4 hosts :)
FreeBSD peeps, i need help --
mfsbsd# dmesg | grep igb
igb0: <Intel(R) PRO/1000 Network Connection version - 2.2.5> port 0xe020-0xe03f mem 0xf7980000-0xf79fffff,0xf7a04000-0xf7a07fff irq 16 at device 0.0 on pci1
igb0: Using MSIX interrupts with 9 vectors
...
How do I find out the model of that NIC? like, the exact model...
pciconf -v IIRC
and, is the 2.2.5 driver the official Intel one? (cuz I don't see it on their site, although I see a 2.2.3 one)
pciconf -v will give you a vendor and hardware number. then you can go to pcidatabase.com or equivalent and get the exact model number
ah cool
let me know if that works. been immersed in CentOS for going on 3 years now. my BSD-fu is rusty
it's probably an i350 i'm guessing
so apparently the net in general has issues tonight. common thread seems to be above.net having issues
jpalmer: "-v" didn't work, but pciconf by itself gave some info. looked up stuff, said "SuperMicro". it is indeed a supermicro server, but could not find the NIC. Found one "close".
In the end, I just did a MB lookup and found the NIC (82580DB)
up_the_irons: probably cos it's integrated
mercutio: yup
up_the_irons: maybe it was pciconf -lv? (try it and lemme know)
if I ssh into this new FreeBSD box (bare metal, not vps), and cat a large text file, it freezes. I had this issue on kvr27 too (on Linux), before I updated the igb driver. The FreeBSD box also uses its own igb driver.
i still don't understand how bgp is meant to work when providers accept traffic and don't forward it
jpalmer: w00t, nice, "-lv" does it :)
and it seems to usually happen to "in the middle" providers
ahh, thats good to know. I was like.. I KNOW it's pciconf!
mercutio: yeah, host reachability is an issue
mercutio: a bad peer will make your life hard
4. xe-2-3-0.cr1.lax112.us.above.net   0.3%   327   1.4   4.6   1.1  62.6   8.2
5. xe-4-1-0.cr1.iah1.us.above.net    46.3%   327  35.6  38.1  32.8  83.9   8.9
that's from arp, but i tried from our network as well, after having a total outage to a site. shifting to a different outbound, it still routed through above.net, but worked, just with higher latency it seems
mzima have a connection to above.net but i don't know how many above.net hosts are affected
there really should be health advertised through
ok, finding random routes from the bgp table shows it's more than one above.net destination
Anyone else experiencing UDP packet loss resulting from DNS queries originating at an ARP VPS?
not that i've noticed. maybe try iperf at 2 megabit?
iperf 2 megabit udp shows no packet loss for me
mercutio: I would appreciate it if you could try the following DNS query: dig @ns1.timewarner.net cnn.com
it's going slow, whatever it is
# dig @ns1.timewarner.net cnn.com
; <<>> DiG 9.4.2-P2 <<>> @ns1.timewarner.net cnn.com
; (1 server found)
;; global options:  printcmd
;; connection timed out; no servers could be reached
;; Query time: 29 msec
;; SERVER: 204.74.108.238#53(204.74.108.238)
from nz it was instant
The same query works just fine from Washington as well as London UK, but strangely so
is ns1.timewarner.net anycast?
not sure
oh, it is anycast. it also doesn't respond to pings
oh look at the mtr
12. ONE-SOURCE.edge1.SanJose3.Level3.net  85.7%  8  9.4  9.4  9.4  9.4  0.0
that'll be the issue
umm http://internethealthreport.com/Main.aspx?Destination=Level3
look at that. it seems level 3 have had issues recently
anyway, they should have more than one nameserver on different networks even if using anycast. and they don't. so it's timewarner's fault
as an interim measure you can use a secondary dns server on a different network, although that doesn't mean the site will work
mercutio: Thanks for looking into this
zsh 3016 # time curl http://www.cnn.com/ > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  127k    0  127k    0     0  1628k      0 --:--:-- --:--:-- --:--:-- 1750k
but it is working in this instance
I was testing a set of web tools for working with DNS
ok
anycast can be slightly confusing
up_the_irons: wrt iscsi, I use netbsd-iscsi-target for stuff in my home office. I wouldn't recommend it for anything but iscsi initiators that can handle reconnecting upon occasion; you have to restart it to reconfig anything..
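Back on the DNS thread: the "test each authoritative server" check can be done from the shell with a short dig loop (same idea as the web tool mentioned later; uses cnn.com as in the discussion, and obviously needs dig and a working network):

```shell
# Query every advertised nameserver for the zone directly.
for ns in $(dig +short NS cnn.com); do
  a=$(dig +short +time=2 +tries=1 @"$ns" cnn.com A | head -1)
  # empty answer = that server timed out, as ns1.timewarner.net did above
  printf '%-28s %s\n' "$ns" "${a:-TIMEOUT}"
done
```

A loop like this makes the anycast failure mode obvious: every listed server can resolve to the same broken location, so "multiple NS records" alone proves nothing about redundancy.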
wrt a client who paid way too much, we have an EqualLogic array. talk about scalable in a big way: they can cluster the storage and even migrate 15mb chunks of disk onto fast disks on different (independent) disk shelves
man, how do I load a driver in FreeBSD that's already included in the kernel?
up_the_irons: kldload?
mfsbsd# kldstat
Id Refs Address            Size     Name
 1    8 0xffffffff80100000 e56d68   kernel
 2    1 0xffffffff8227d000 11b28    ahci.ko
 3    1 0xffffffff82412000 4d6e     tmpfs.ko
mfsbsd# kldload ./if_igb.ko
kldload: can't load ./if_igb.ko: File exists
mfsbsd#
oh
you mean you want to unload it and then reload it
oh, you don't compile it in
mfsbsd is a read-only ISO so i can't change it... oh man this project just keeps getting more fucked...
easy solution: stick a pro1000 card in temporarily. igb can cause issues for some people. pro1000 is the em driver and used to cause issues for some people, but it's been around a long time and is pretty stable now
that's not real easy when the server isn't next to me ;)
oh true
uhh, any kernel boot param I can use to prevent igb from loading?
and it's not easy to mount a different iso?
i *can* mount a different ISO, yeah...
maybe try 9.1-rc2?
maybe i need to go back to the LiveCD
it'll be stable soon anyway
yeah.. i'll come back to this later heh
aren't os installs fun
mnathani: that timewarner dns server is working again on arp btw
up_the_irons: chroot with sshd is how I've set up chroot envs before on OpenBSD. works great! I've even done chroot for sftp for uploading webstuffz... since chroot is a unix technique I can't see why it would not work on all flavors of unix
mercutio: Thanks. It is working fine now: http://dns.winvive.com/dns.php?Domain=cnn.com&QueryType=A
you may want to use a secondary server to test doing the queries from too
anyone else notice some weirdness on the internet (connection issues, dns failures) lately?
The link above tries to find each authoritative server and tests each one
static: yes :) not a lot, but there's the issue mnathani raised with a dns server not responding, and there was an issue with above.net having 50% packet loss
ouch
well actually it was 100% packet loss when i first noticed it. the above.net issue seemed to affect traffic from los angeles to dallas at least
and mnathani's issue with dns was with someone hosting multiple dns servers in one location, rather than spreading them out
well, they were spreading them out by using anycast, but each name server mapped to the same not-working location
hmm yea
which i'm sure is meant to be propagated when you register a domain, to have multiple name servers in different locations
somebody on [outages] reported issues with ultradns
ultradns use different locations for primary/secondary. they're at least smarter than "timewarner" (although they may not be in some locations, for all i know)
ultradns has terrible routing from my country. i emailed them about it. didn't do any good. it's like 300 msec away or something. err, to the primary. it's like half that to the secondary
new zealand eh
yip. and they're routing to hong kong i think
ultradns is heavily anycasted, but not enough for users in nz i suppose
Anyone watch the debate?
static: but the anycasting gives worse routing! that's the issue heh. i think it gave some packet loss too
what's their primary dns? i'm assuming their primary domain may not use it
udns1.ultradns.net and udns2.ultradns.net.
oh sweet, it's fixed :) primary is like 26 msec, secondary like 170
156.154.70.1, whatever it is, is 145
i found that when googling, but i think it's recursive
28 msec query time