[01:55] heh, thats a pretty great domain visinin [01:56] haha, thanks [01:56] i tried to get fre.sh while i was at it, but no luck [01:59] heh it looks like a .sh is like 100 usd [02:48] *** toddf_ has joined #arpnetworks [03:01] *** toddf has quit IRC (Read error: 110 (Connection timed out)) [03:22] forcefollow: i'm here [04:51] Yawn [04:56] up_the_irons: My VM clock is all over the place and ntpd seems not to be doing anything. Ideas? [04:58] mmm... strange [04:58] what ntpd is it exactly? openntpd? [04:58] i know openntpd will only slowly update a bad clock (so it doesn't jump) [04:59] Ah, perhaps I'm an idiot. I would have expected an error message if the config file did not exist, but instead it was running and doing nothing! [04:59] So I changed that. Let's see if my clock syncs up now. :) [05:01] haha [05:01] :) [05:32] mhoran: tried sending you a maintenance advisory (actually, to everyone): [05:32] A8566E8C59 1542 Thu Sep 10 05:24:31 garry@rails1.arpnetworks.com [05:32] (host vroom.ilikemydata.com[71.174.73.69] said: 450 4.7.1 : Recipient address rejected: Greylisted for 5 minutes (in reply to RCPT TO command)) [05:32] mhoran: so I hope you still get it [05:34] Should go through when the mailserver retries to send it [05:34] We do the same thing [05:34] *** Thorgrim1 is now known as Thorgrimr [05:34] Thorgrimr: ah, gotcha, cool [05:36] The idea being that spammers won't waste the time to come back and try again, but any decent MTA will [05:39] Dovecot killed itself when I synced my clock and Postfix got confused. Secondary MX, which does greylisting, answered because primary was down. Fun! [05:40] * mike-burns tries to convert maintenance window time into UTC then EDT, has to break out a calculator [05:42] mhoran: at least your secondary picked it up [05:42] mike-burns: should be something like 03:00 EDT [05:43] Thorgrimr: gotcha, interesting [05:43] whats the syntax to make unreal ircd bind to a range of ports. [05:43] I found a Yahoo Answers thread that converted 11:00 PST to EDT, amusingly. [05:43] mike-burns: was it correct? [05:43] obsidieth: not sure... [05:44] Yup! [05:44] nice [05:47] doh. [05:47] that was easy [05:59] No email for me :( [06:01] Thorgrimr: it's coming real soon (about 10 minutes) [06:01] for the record, i could not be more pleased with how this is working so far up_the_irons [06:02] RAD [06:02] obsidieth: glad you like it :) [06:03] up_the_irons: No worries, I'm at work now anyway, and teaching this afternoon, so no play for me [06:03] ah shucks [06:03] ;) [06:08] *** vtoms has quit IRC ("Leaving.") [06:12] Thorgrimr: ...and it's off! [06:14] Alrighty then :) [06:15] up_the_irons: So this ntpd issue is related to a Cacti issue I'm trying to track down. [06:15] I thought ntpd would keep it in sync, and now that it's set up correctly, it seems to be. [06:16] mhoran: your cacti or my cacti? :) [06:16] However, my cron tasks aren't running on time at all. [06:16] My cacti. [06:16] ah [06:16] 5 minute tasks are running, sometimes, over a minute late. [06:16] sounds like a cron issue [06:16] So my graphs are basically useless. [06:17] I figured it was because my clock wasn't synced and it was getting all confused, but it seems something else may be up. [06:17] Wondering if you've seen anything similar. [06:17] if the time is right, but cron doesn't execute on time, suspect cron. perhaps it needs to be restarted [06:18] i've seen time drift on VMs (pretty much across the board: Xen, KVM, VMware, etc...) 
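For reference, a minimal OpenNTPD config (assuming mhoran's "ntpd" really is OpenNTPD, as up_the_irons guesses above) is only a line or two; the pool hostname below is a generic example, not the server actually in use on that VM:

    # /etc/ntpd.conf (OpenNTPD) -- example server, substitute your own
    servers pool.ntp.org          # query every address the pool name resolves to
    #listen on 127.0.0.1          # optional: also serve time to local clients

Starting it as `ntpd -s` sets the clock once at startup instead of relying only on the slow slewing behaviour mentioned at 04:58.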
[06:18] but ntpd pretty much keeps it in line [06:18] i haven't had any issues with cron though, as long as time is sync'd [06:18] Yeah. I've never seen this cron issue before. Didn't have it when running VMware, and my Xen boxes at work seem to be doing just fine. [06:18] Huh. [06:19] mhoran: what time does your VM show? [06:19] Thu Sep 10 09:18:22 EDT 2009 [06:19] as of now, the host says: [06:19] Thu Sep 10 06:19:20 PDT 2009 [06:20] i'm not sure how cron gets its time, from hardware clock, OS, or what.. probably through whatever stdlib C call provides that [06:22] *** heavysixer has joined #arpnetworks [06:22] Okay. I restarted cron, let's see what that does. [06:23] roger [06:23] heavysixer: how's it hangin [06:24] up_the_irons: yo [06:24] just getting ready to start working on digisynd's site again., [06:24] you? [06:25] heavysixer: provisioning VMs [06:25] gotcha [06:25] you are getting quite a few clients now huh? [06:25] Nope, still screwed up. Huh. [06:25] heavysixer: it's picking up [06:25] up_the_irons: That's quite the upgrade the server is getting! [06:26] mhoran: logs don't show anything? [06:26] mhoran: yeah, 16 GB of RAM is going in, and another quad-core Xeon @ 2.66GHz bad boy [06:26] heavysixer: haven't done Amber's VPS yet, had two orders in front of her; tell her not to hate me ;) [06:28] up_the_irons: no worries we are not at the point where we are ready to deploy. [06:28] heavysixer: cool [06:28] up_the_irons: soon though ;-) [06:28] up_the_irons: Will this bring it to dual quad core or quad quad core? [06:29] so no slacking [06:29] heavysixer: oh it's gonna be up today, for sure. :) [06:29] up_the_irons: cool [06:29] mhoran: dual quad [06:30] That's exciting. [06:30] mhoran: really interested to see how load avg goes down with the addition of more cores to distribute the load [06:30] My static Web site will certainly benefit from the power! [06:30] LOL [06:30] Yeah. [06:30] if the load avg drops in half, that'd be awesome utilization of the cores [06:30] Oh totally. [06:32] omg, now I know how to say "Sent from my iPhone" in Japanese [06:32] iPhoneから送信 [06:32] Hahaha. [06:32] saw that on the bottom of a new customer's email [06:32] (who is in Japan) [06:33] Huh. So my clock seems to be fine, but these tasks are not running 5 minutes apart. They're all over the place. [06:33] It's almost like the scheduler is not synced with the clock or something. [06:35] mhoran: you should do like: [06:36] */1 * * * * root uptime [06:36] erm [06:36] */2 [06:36] or w/e it is [06:36] Yeah. [06:36] so it emails you something simple [06:36] see if you get the emails on-time [06:36] and at regular intervals [06:36] Got it in there now. We'll see. :) [06:37] cool [06:37] That's a good way to make me feel loved! [06:37] Send myself e-mail. [06:37] LOL [06:50] So I have this set to run every minute. It ran at 9:46, then 9:49. Skipped everything in between. Interesting. [06:51] Sep 10 09:46:20 friction /usr/sbin/cron[25023]: (mhoran) CMD (/bin/date) [06:51] Sep 10 09:49:05 friction /usr/sbin/cron[25051]: (mhoran) CMD (/bin/date) [06:51] Didn't even try to run it in between. 
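The every-minute test up_the_irons sketches above would look something like this in the system crontab; the MAILTO address is a placeholder, and with Vixie cron each run's output is mailed there so the mail timestamps can be compared against what date printed:

    # /etc/crontab -- system crontab, so the line carries a user field
    MAILTO=you@example.com          # placeholder address; cron mails each run's output here
    * * * * *  root  /bin/date     # every minute ("*/1" is equivalent)

On a healthy box the mails arrive right on the minute; the 09:46 -> 09:49 gap in the log above is exactly what this is meant to expose.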
[06:53] mhoran: did you use "*/2", i think that's every other minute [06:54] * * * * * [06:55] heh [06:55] let's see, on one of my Linux VMs, I have: [06:55] ep 9 06:20:01 ice /USR/SBIN/CRON[12324]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null) [06:55] Sep 9 06:25:02 ice /USR/SBIN/CRON[12400]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )) [06:55] Sep 9 06:30:01 ice /USR/SBIN/CRON[12430]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null) [06:55] so that's pretty much right on every 5 minutes [06:55] let me find a FreeBSD one... [06:55] Yeah. I'm drifting way past the minute. Interesting. [06:58] Sep 10 06:35:36 freebsd-ha /usr/sbin/cron[19804]: (root) CMD (/usr/libexec/atrun) [06:58] Sep 10 06:41:08 freebsd-ha /usr/sbin/cron[19807]: (root) CMD (/usr/libexec/atrun) [06:58] Sep 10 06:46:25 freebsd-ha /usr/sbin/cron[19823]: (root) CMD (/usr/libexec/atrun) [06:58] Sep 10 06:50:36 freebsd-ha /usr/sbin/cron[19826]: (root) CMD (/usr/libexec/atrun) [06:58] Sep 10 06:56:08 freebsd-ha /usr/sbin/cron[19833]: (root) CMD (/usr/libexec/atrun) [06:58] atrun is supposed to run every 5 minutes [06:58] and look at that, cron is like being lazy about it [06:58] it's "about" every 5 minutes [06:59] the time delta is pretty close to 5 minutes, but it's not executing "on the dot" [07:01] Yeah. That's what's upsetting cacti. [07:01] Hrm. [07:01] Thu Sep 10 09:56:52 EDT 2009 [07:02] That one was almost a minute late! [07:02] i wonder if cron is seeing a different time [07:11] [25430] TargetTime=1252591500, sec-to-wait=36 [07:11] [25430] sleeping for 36 seconds [07:11] [25430] TargetTime=1252591500, sec-to-wait=-115 [07:11] Interesting. [07:33] whoa, weird [07:47] I received two maintenance notices, identical, except for 'Message-Id:...@rails1.arpnetworks.com>' vs 'Message-ID: ..@garry-thinkpad.arpnetworks.com>\nUser-Agent: Mutt/1.5.16..' [07:48] just fwiw ;-) [07:51] *** toddf_ is now known as toddf [07:53] *** vtoms has joined #arpnetworks [07:59] toddf: There was an error when sending the first one, so to be safe I sent it again from just my regular client (mutt) [07:59] toddf: looks like you got them both anyway :) [08:00] absolutely time for me to expire, thankfully i slept a little already [08:00] cd $bed [08:01] ;-) [08:31] [mike@jack] ~% date; sleep 1; date [08:31] Thu Sep 10 11:30:44 EDT 2009 [08:31] Thu Sep 10 11:30:47 EDT 2009 [08:31] 10 minutes later, [08:31] [mhoran@friction] ~% date; sleep 300; date [08:31] Thu Sep 10 11:20:10 EDT 2009 [08:32] On my work laptop, [08:32] [mhoran@mhoran-thinkpad] ~% date; sleep 60; date [08:32] Thu Sep 10 11:29:22 EDT 2009 [08:32] Thu Sep 10 11:30:22 EDT 2009 [08:32] So, something is up. [08:34] *** vtoms has quit IRC (Remote closed the connection) [08:39] *** vtoms has joined #arpnetworks [08:48] [mhoran@friction] ~% date; sleep 300; date [08:48] Thu Sep 10 11:20:10 EDT 2009 [08:48] Thu Sep 10 11:41:10 EDT 2009 [09:26] *** greg_dolley has joined #arpnetworks [09:26] hello! [09:39] *** ballen has joined #arpnetworks [09:39] 30 minutes of downtime eh? [09:42] ballen: You running FreeBSD? [09:42] ya [09:42] Been trying to diagnose some issues with cron, which seem to trace back to sleep(), which may even be scheduler related. [09:43] Does date; sleep 60; date act as expected for you? [09:43] rgr chking [09:43] When I ran it, sleep 300 waited for took 21 minutes. 
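A quick way to quantify the drift being eyeballed with `date; sleep N; date` is to loop over a few durations and print the difference; this is a generic POSIX sh sketch, nothing host-specific:

    #!/bin/sh
    # request several sleep durations and report the actual wall-clock time taken
    for want in 1 10 60; do
        start=$(date +%s)
        sleep "$want"
        end=$(date +%s)
        echo "requested ${want}s, slept $((end - start))s"
    done

On the affected FreeBSD guests the "slept" figure comes out well past the requested one, which matches cron's negative sec-to-wait in the debug output above.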
[09:44] up_the_irons: when you get in let me know, would like to discuss expectations of uptime, etc and so forth [09:47] mhoran: yea its all sorts of messed up [09:47] [ballen@arp ~]$ date; sleep 10; date [09:47] Thu Sep 10 12:46:29 EDT 2009 [09:47] Thu Sep 10 12:47:09 EDT 2009 [09:47] 7.2? [09:47] ya [09:47] Yeah. Something is definitely borked. [09:47] I noticed because my 5 minute Cacti cron has been complaining for months. :) [09:48] does 7.2 use the new scheduler [09:48] I ran cron in debug mode and saw that it had a negative sec-to-wait. So then I tested sleep, which is exhibiting the same behavior. [09:48] Yes, it does. [09:48] Not totally sure if it's scheduler related, or something else. [09:49] But, something is definitely busted. [09:49] yep [09:49] Hopefully up_the_irons can help us figure it out. [09:49] May need to mail the FreeBSD lists as well. [09:49] Probably after work. It's crazy today. [09:49] yea [09:50] just woke up from working on thesis till 4am last night [09:50] hopefully no one at work misses me [09:51] whats sleep use to tell time? [09:52] nanosleep() is the syscall. [09:55] hmm, yea really don't feel like figure this one out at the moment. Let me know if figure out anything. My gut feeling is it has to do with KVM/Qemu [09:55] likely how nanosleep is counting time [09:56] and how KVM is sharing cycles [09:56] brb need coffeee [10:11] *** ballen is now known as ballen|away [10:14] *** heavysixer has quit IRC (Read error: 145 (Connection timed out)) [10:21] *** heavysixer has joined #arpnetworks [10:58] *** greg_dolley has quit IRC (Read error: 110 (Connection timed out)) [11:03] *** ballen|away is now known as ballen [11:07] *** ballen has quit IRC (Remote closed the connection) [12:14] mhoran: here's what I have from a Linux VM: [12:14] garry@ice:~ $ date; sleep 1; date [12:14] Thu Sep 10 12:12:07 PDT 2009 [12:14] Thu Sep 10 12:12:08 PDT 2009 [12:14] garry@ice:~ $ date; sleep 20; date [12:14] Thu Sep 10 12:12:15 PDT 2009 [12:14] Thu Sep 10 12:12:35 PDT 2009 [12:14] the host box is the same [12:15] on FreeBSD it is jacked: [12:15] [arpnetworks@freebsd-ha ~]$ date; sleep 1; date [12:15] Thu Sep 10 12:15:37 PDT 2009 [12:15] Thu Sep 10 12:15:40 PDT 2009 [12:15] Good to know. Looks like all of us on FreeBSD are experiencing this. [12:15] Do you have something that's not 7.2 (the old scheduler)? [12:16] mhoran: i believe I do, but it's stopped right now cuz i ran out of RAM (hence the maintenance tonight :) [12:16] w00t, OpenBSD still rocks it: [12:16] Heh. Okay. [12:16] s3.lax:~> date; sleep 20; date [12:16] Thu Sep 10 12:01:14 PDT 2009 [12:17] Thu Sep 10 12:01:34 PDT 2009 [12:17] s3.lax:~> date; sleep 1; date [12:17] Thu Sep 10 12:01:39 PDT 2009 [12:17] Thu Sep 10 12:01:40 PDT 2009 [12:17] s3.lax:~> date; sleep 1; date [12:17] Thu Sep 10 12:01:41 PDT 2009 [12:17] Thu Sep 10 12:01:42 PDT 2009 [12:17] Interesting. [12:18] given OpenBSD is probably the least virtualized OS, and it is working, I'd have to point the finger at FreeBSD on this one, instead of KVM/QEMU. however, it probably has to do with the interaction of the two [12:19] Yeah. I did not have this problem with VMware. [12:19] that OpenBSD VM is on the same host too [12:19] mhoran: you tried 7.2 w/ VMware? [12:20] Ooh, this is 7.1. [12:20] I think I have a 7.2 running somewhere. 
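To confirm what sleep(1) is actually calling (ballen's question above), ktrace/kdump on the FreeBSD guest will show the nanosleep syscall and, with relative timestamps, roughly how long it blocked; ktrace.out is just a scratch file name:

    # trace sleep(1)'s syscalls, then pull out the nanosleep CALL/RET lines
    ktrace -f ktrace.out sleep 1
    kdump -R -f ktrace.out | grep nanosleep
    rm ktrace.out

If nanosleep returns long after the requested interval, that points at the kernel's timekeeping under KVM/QEMU rather than at cron or the sleep utility themselves.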
[12:20] 7.1/ESXi -- [12:20] vps% date; sleep 20; date [12:20] Thu Sep 10 15:19:15 EDT 2009 [12:20] Thu Sep 10 15:19:35 EDT 2009 [12:20] vps% date; sleep 1; date [12:20] Thu Sep 10 15:19:38 EDT 2009 [12:20] Thu Sep 10 15:19:39 EDT 2009 [12:21] vps% date; sleep 1; date [12:21] Thu Sep 10 15:19:40 EDT 2009 [12:21] Thu Sep 10 15:19:41 EDT 2009 [12:21] So that's good. [12:21] I'll play with it more tonight around the maintenance window; I'll have a lot of time to kill then [12:22] Yeah, 7.2/ESX is fine. Same as 7.1. [12:22] ah ok [13:05] *** greg_dolley has joined #arpnetworks [13:11] *** heavysixer has quit IRC () [13:18] greg_dolley: welcome to IRC [13:20] thx :-) [13:28] greg_dolley: haha, yo greg [13:28] this is andy from revver, not sure if you remember me [13:33] up_the_irons: Do you have a machine you can test this on? Apparently adding hint.apic.0.disabled="1" may fix this. [13:34] mhoran: machine = FreeBSD VM? [13:34] cablehead: he must be at lunch... [13:34] up_the_irons: Yes. [13:35] mhoran: sure, where should I put that? in sysctl.conf? [13:35] Oh, I left out -- adding ... to /boot/loader.conf [13:35] ah ah [13:35] up_the_irons: either that or rocking out to some thumping metal [13:35] cablehead: true! [13:37] mhoran: does this look right: [13:37] [arpnetworks@freebsd-ha ~]$ cat /boot/loader.conf [13:37] hint.apic.0.disabled="1" [13:37] [arpnetworks@freebsd-ha ~]$ [13:37] That should be it. [13:37] ok, rebooting... [13:45] cablehead: hey! I remember you ;-) [13:50] man [13:50] i dont thin i'll ever get a freebsd vps [13:50] Don't say that! We'll get to the bottom of this ... [13:50] Aside from that, it works great! [13:51] Yeah, no complaints from me. I don't do a lot of sleep-related work. [13:51] noo [13:51] not cause of that [13:52] when i use bsd, i use it for serious thing.. i build things by hand, or ports [13:52] never packages. [13:55] jeev: so what's the prob? ;) with ports, you can install everything from source, that's one cool thing about it [13:56] yea [13:56] mhoran: ok, so, had trouble with that line. it won't find the disk for some reason. I had to go into boot loader and do 'unset hint.apic.0.disabled' and then it booted fine [13:56] except, vps.. = slow ;) [13:57] *** vtoms has quit IRC ("Leaving.") [13:57] Interesting. [13:58] mhoran: you want to play with my test VM? [13:58] i could give you login and console [13:58] Sure, probably have some time after work ... [13:59] jeev: Haven't found my VPS to be slow. At work, we run everything virtualized, and it's fine. [13:59] ok, just don't trash it too much, I use it for DRBD testing at the moment :) [14:00] Heh, okay. No worries. [14:03] jeev: My ARP Networks VPS is faster than my laptop much of the time. [14:04] faster? w00t [14:04] * up_the_irons pets his new Intel Xeon E5430 [14:17] *** visinin has quit IRC ("sleep") [14:20] eh [14:20] i mean like [14:20] how often can you build world. [14:20] i want a peer1 LA colo [14:20] for cheap. [14:29] jeev: peer1 ain't cheap i hear [15:13] yea [15:13] there was someone on wht [15:13] who did colo for like 70 or somthing [15:13] i forgot what bandwidth [15:13] but he told me he's leaving it soon [15:15] *** ballen has joined #arpnetworks [15:16] up_the_irons: you on [15:16] ballen: yo [15:17] 30 minutes of downtime seems a bit long no [15:17] ? 
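Given that the hint.apic.0.disabled line in /boot/loader.conf left the test VM unable to find its disk, it is safer to try a hint like that interactively first and only persist it once the box boots cleanly with it. A sketch of the sequence, from the loader's OK prompt (escape to the loader prompt from the boot menu):

    OK set hint.apic.0.disabled="1"
    OK boot

and, if the kernel then can't find its disk (as happened above), reset the VM and back it out the same way:

    OK unset hint.apic.0.disabled
    OK boot

set, unset and boot are standard loader commands, so nothing has to be repaired from a rescue environment if the hint misbehaves.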
[15:17] ballen: it is, but what can I do; I have RAM and a CPU to install, and if I rush it, something could break, and then the downtime would be much greater [15:18] ballen: if it was just RAM, it'd be a lot quicker [15:18] no way to migrate vm's? [15:19] ballen: it would take longer than 30 minutes ;) a 'dd' from LVM to disk, then transfer to another server, and 'dd' from disk back to LVM <-- takes some time as well, and you're down the whole time [15:19] sigh.... [15:20] ballen: my ultimate goal is to have the VM disk images to be on a DRBD volume, if performance turns out to be still good [15:20] ballen: i'm currently testing this [15:20] SAN storage...that fix it all [15:21] so centrally store the images, and do diskless botting on the Qemu [15:21] booting even [15:21] ballen: and then it would be possible to "live" migrate and if everything goes right, there would be no downtime at all [15:21] Nat_UB: SAN storage is very expensive [15:21] Tell me about it...doing that at two sites at $WORK [15:21] doesn't DRBL use NFS ? [15:23] ballen: DRBD would be "central" store (more like, two boxes get paired), and it is trivial to boot off it. That's a solved problem, but there are performance issues to account for [15:23] yea I've used DRBL a year ago [15:23] to deploy a 80+ machine lab [15:23] ballen: DRBD is a distributed block device; not related to NFS [15:23] awwww [15:23] Diskless Remote Booting Linux [15:23] hah [15:24] LOL [15:24] anywho [15:24] n/m [15:24] ;) [15:24] ballen: trust me, I feel your pain, I have several important VMs of my own that are going down (arpnetworks.com site itself, pledgie.com, my shared hosting server) [15:25] whats the need for the new hardware, obviously other than increasing capacity. Couldn't just buy a new server? [15:25] ballen: I'm going to try to be as quick as possible; and once I certify the DRBD setup I'm testing currently, I will those who want to be on that, go on it. [15:25] ballen: Giving him hell huh? [15:26] a little [15:26] just 30 minutes downtime suuuucks [15:26] :) [15:26] but understandable I suppose [15:26] In this case I'm still building....so downtime no concern for me hehehehe [15:27] I work in the 'NO DOWNTIME' field...so I've heard all the griping before....Irons, keep up the good work! [15:27] ballen: the "just" in "just buy a new server" is the hard part ;) I don't buy cheap boxes, I have to shell out about $6K, and that just isn't gonna happen given I can double the cores on the current box *and* double the RAM [15:27] on the current box [15:27] up_the_irons: does DRBD do synchronous writes? [15:28] up_the_irons: well you should have thought of that ahead of time :-p [15:28] ballen: "DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network based" [15:28] yes yes [15:28] ballen: dude, to be honest, I was like this: [15:28] when you write to one side of mirror [15:28] does it wait till the other side to finish [15:29] "I don't know how well my new VPS offering will sell, so let's not buy both CPUs and 32 GB of RAM off the bat, start with 1 CPU and 16 GB of RAM and upgrade later" <-- now that bites me in the ass :) [15:29] wondering if that is the source of performance issues [15:29] ballen: OK, gotcha. 
that part of it is configurable [15:30] ballen: I configure it to wait for the write on the other end; it has to be very consistent [15:30] up_the_irons: yea I figured that was your train of thought. Just giving you a hard time [15:30] yea [15:30] async without conflict resolution is a piss poor idea [15:30] ballen: my thinking is, i'd rather sacrifice performance than have a catastrophic failure [15:31] althought [15:31] although* [15:31] if you think of it in a master -> slave configuration [15:31] where the slave will never write [15:31] ballen: where it won't matter much is in reads, cuz reads will come off the local disk; which is kinda an advantage over an external SAN setup [15:31] whats wrong with doing writes asynchronously [15:33] ballen: well, master box crashes while writing to local disk, yet that block isn't replicated on the slave? I think that would be a problem [15:33] hmm [15:34] I guess its a matter of not allowing it to get too far out of sync [15:34] and allowing a certain amount of time for lost data [15:34] whatever one would be confortable with [15:35] ballen: yeah, I think the #1 goal with DRBD is for the data to never go out of sync; but it gives you some knobs to play with [15:35] so how much performance loss are you seeing using it [15:36] i'm not to the hard benchmark part yet; more like "play with the machine; does it feel slow?" [15:36] so far, i can't really tell the different [15:36] ah [15:36] *difference [15:36] cool, what kind of network do you have between the machines [15:37] He's got 10gig... :) [15:37] ballen: 1G [15:37] just one link per box? [15:37] ah [15:38] dedicated to task? [15:38] ballen: right now i actually have two boxes physically linked together, no switch in between [15:38] ah [15:39] ballen: if I got more NICs, I could bond them, and I hear the network speed would be faster than disk write speed and then performance issues are mute; but I want to *see* this in action before I certify it [15:40] Nat_UB: i wish i had 10G :) [15:40] the intel ones are like $2K a pop [15:40] yea good plan. There may be some overhead in bonding. Does DRBD run over TCP/IP [15:40] ballen: yes, it does run over TCP/IP [15:40] Haven't tried 10g but done some bonded stuff [15:41] AoE (ATA over Ethernet) is another alternative, that runs on layer 2, but is pretty much feature-less and does not afford much protection to someone accidentally writing to the volume from two boxes at the same time (which will instantly corrupt it) [15:41] I use AoE for backup images only [15:42] but that said, AoE is pretty cool in its simplicity [15:42] yea AoE is pretty neat [15:43] i got this from a 'dd' test on my FreeBSD DRBD testing VM: [15:43] 1048576000 bytes transferred in 76.882014 secs (13638769 bytes/sec) [15:43] so like 13 MBps [15:43] not all that great [15:43] but those are real writes, not cached [15:44] yea, whats that same benchmark on a typical FreeBSD VM [15:45] let me see [15:47] this is what i'm running: [15:47] dd if=/dev/zero of=delete-me bs=1M count=2000 [15:47] you want to make sure 'count' is about double your RAM, so caching goes away [15:50] now, the performance of 'dd' will give some raw numbers that may or may not correlate with how the VM actually performance during normal use; that would depend on a lot of other factors. Even if dd has lower speeds on DRBD, the trade-off for uptime and easy of VM migration may well be worth it [15:51] true [15:53] I'll be on later, and will be around during the maintance window. 
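The "wait for the write on the other end" behaviour up_the_irons describes is DRBD's synchronous mode, protocol C. A minimal resource stanza along those lines is sketched below; the resource name, hostnames, backing LVM paths and IP addresses are placeholders for illustration, not the actual ARP configuration:

    resource vmstore {
      protocol C;                     # don't acknowledge a write until the peer has it on disk
      on hosta {
        device    /dev/drbd0;
        disk      /dev/vg0/vmstore;   # backing LVM logical volume (example path)
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on hostb {
        device    /dev/drbd0;
        disk      /dev/vg0/vmstore;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

Reads are always served from the local disk, which is the advantage over an external SAN noted above; only writes pay the replication round trip, and that is what the dd test (about 13 MB/s) is measuring.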
Let me know if you make any breakthroughs with DRBL. [15:54] *** ballen has quit IRC ("Bye!") [15:54] ballen: will do! [15:54] Nat_UB: thanks for the support up there, BTW ;) [15:55] Sure thing! U'r the man! [15:55] :) [16:25] *** greg_dolley has quit IRC () [16:32] *** heavysixer has joined #arpnetworks [18:05] *** vtoms has joined #arpnetworks [18:32] *** heavysixer has quit IRC () [18:36] *** vtoms has quit IRC ("Leaving.") [19:05] *** Nat_UB has quit IRC (bartol.freenode.net irc.freenode.net) [19:06] *** Nat_UB has joined #arpnetworks [19:14] *** heavysixer has joined #arpnetworks [19:48] we need more IRC'ers [19:48] * up_the_irons goes back to editing build logs [20:13] *** heavysixer has quit IRC (Read error: 104 (Connection reset by peer)) [20:30] build bots [20:30] ;) [20:32] no, real people :) [20:39] *** timburke has quit IRC (Remote closed the connection) [20:42] *** timburke has joined #arpnetworks [20:56] i think i might have a recruit for you [21:15] * up_the_irons does happy clappy hands [21:16] obsidieth: nice :) [21:30] obsidieth: someone you know? or found on a message board? [21:31] btw, guys, let me know if there are other boards out there besides WHT that have an advertising section; I post weekly on WHT and recently found webhostingboard.net. If there are others, I'd like to know :) [21:33] cd $data-center [21:33] i will advertise you for $1000/month on my google PR1 [21:33] jeev: LOL [21:33] ;) [21:33] palm pre is so lame [21:45] *** ballen has joined #arpnetworks [21:45] ping [21:47] yeah someone i know [21:47] people on efnet arent used to servers that actually stay up:p [21:48] huh [22:29] efnet, heh [22:29] back in the day [22:29] efnet sucks [22:29] lol, DRBD is now following you on Twitter! [22:29] "DRBD is now following you on Twitter!" [22:29] that is [22:30] heh [22:30] looking at espresso machines, and grinders [22:30] is dropping a grand on an espresso machine + grinder a good thing? [22:30] wow [22:31] i'd rather buy a good 48 port GB switch for that [22:31] and for a grand, even that is hard to find [22:31] heh [22:31] have a good gig switch already [22:31] don't use it [22:31] * up_the_irons motions ballen to hand it over [22:32] I keep my computer equipment at home very light [22:32] to keep power down [22:32] yeah, i don't have much at home [22:32] i keep it all at the data center :) [22:32] its just a netgear [22:32] shit i just bought a dell poweredge [22:32] 48 port gig and i haven't even sent it to the datacenter [22:32] there's some new cage here where the fool has like 30 power circuits in 200 sq. ft. [22:33] i was like "wwhhhhhhhhaaaaa?" [22:33] that must be me [22:33] LOL [22:33] any idea what amperage? 
[22:33] i was a hazard at uscolo [22:33] they would detour people around my stuff during tours [22:33] i hate dell management interface on their f*cking switches, but they are priced well [22:33] yea i aint gonna use the management stuff [22:33] just cli [22:33] http://www.netgear.com/Products/Switches/FullyManaged10_100_1000Switches/GSM7212.aspx [22:33] it's some ieee standard [22:33] my home switch heh [22:34] ballen: looks like 20 ampers each; they must be in redundant A/B pair, cuz I can't imagine they actually gave him all that power [22:34] damn [22:34] ballen: i've heard some good things about the GSM [22:34] ironically [22:34] brb [22:34] k [22:34] yea its a solid switch, but I really haven't had to do much with it [22:35] Grinder: http://www.visionsespresso.com/node/73 [22:37] Espresso Machine: http://www.wholelattelove.com/Rancilio/ra_silvia_2009.cfm [22:38] why do you need that [22:38] its more of a question of why do I not need that [22:38] as well as a small coffee addiction [22:39] lol [22:40] I really enjoy espresso drinks, and it would save me money in the long run if I don't goto any cafe [22:40] $3 latte once a day [22:41] that's the point of coffee or drinks [22:41] LOL [22:41] to go into the place [22:41] is 1095 bucks [22:41] and see how bitches [22:41] hot [22:41] hahaha [22:41] up_the_irons wishes he could get the girls from glendale! [22:41] lmao [22:41] well some are fugly [22:41] but some are hot [22:42] the chicks in glendale, yeah some are pretty hot [22:42] sexUAL [22:42] def some hot females at some various cafes I've been into [22:43] also cafe is like 15 minutes away [22:43] and in the morning, F that! [22:44] shit [22:44] sounds like you're from Charlevoix, MI [22:44] when you say it's 15 min away [22:44] ya [22:45] wow, what a guess! [22:45] hah [22:45] ballen, gto a linux or bsd router on your cable ? [22:45] so this is a fun thing [22:45] actually I'm behind a WISP [22:46] who uses AT&T, and Charter [22:46] ahh [22:46] i was gonna say [22:46] sniff me some mac addresses [22:46] i steal cable internet sometimes [22:46] hah [22:46] although i have a primary isp [22:46] i prefer stealing charter, 20 megs sometimes [22:46] their network sucks [22:46] T - 15 minutes [22:51] ironically, the time right before a maintenance window is a time where I actually wait and do nothing (I've already prepared), so now I'm just waiting for the clock to strike the right time [22:51] weird [22:51] what are you gonna do [22:51] i didn't read it [22:51] yep, always annoying period of time [22:51] cuz I could start now, but I told everyone 11, so I must wait [22:52] jeev: RAM + CPU upgrade [22:52] on everything ? [22:52] jeev: just one box, but it holds the majority of the guys in here [22:52] is arpnetworks just one box >? :) [22:52] i dont mind really [22:52] so far, stable as fark [22:52] jeev: no, i have several, but the one in question is my newest [22:52] cool [22:53] so how is CPU split [22:53] is it burst for everyone ? [22:53] no burst [22:53] if you ordered 1 cpu, you get 1 cpu [22:53] then what [22:53] ahh [22:53] how many cpu's in the box i'm on [22:53] some guys are running SMP, but not many [22:53] jeev: 1 CPU, quad core [22:54] so i'm considered a one cpu user [22:54] since cpuinfo shows me a single cpu [22:54] so I actually have a core all my to myself? [22:55] so the server i'm on has 4 cores right now [22:55] so 4 users? 
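For what it's worth, the per-VM core pinning up_the_irons says he could do but doesn't is just CPU affinity on the VM's qemu process; on the Linux host that is a one-liner with taskset. The PID and core number here are placeholders:

    # pin one VM's qemu process to core 3 (PID 12345 and core 3 are examples)
    taskset -cp 3 12345
    # query which cores a process is currently allowed to run on
    taskset -cp 12345

Leaving affinity unset, as he does, lets the host scheduler balance VMs across whichever cores are least loaded at the moment.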
heh [23:00] ok guys, T minus 1 minute [23:00] i'll get disconnected [23:00] k have fun, don't break anything ;-) [23:01] *** mike-burns has quit IRC ("WeeChat 0.2.6.3") [23:41] *** [FBI] starts logging #arpnetworks at Thu Sep 10 23:41:06 2009 [23:41] *** [FBI] has joined #arpnetworks [23:41] see I just had to swear [23:42] heh [23:43] is that personal stop logging ? [23:43] hmm [23:43] I'd assume so [23:43] [FBI]: off [23:43] *** [FBI] has left [23:45] *** [FBI] starts logging #arpnetworks at Thu Sep 10 23:45:00 2009 [23:45] *** [FBI] has joined #arpnetworks [23:45] and hes back [23:45] w00000000000000000000000000000000t [23:45] what did I miss? [23:45] I'll show you what you guys missed: [23:45] total used free shared buffers cached [23:45] Mem: 31856 15133 16723 0 2905 64 [23:45] look at "free" :) [23:45] nice [23:45] up_the_irons, i cancelled while you were gone. [23:45] lol just kidding [23:45] lmao [23:45] ahahahah [23:45] up_the_irons, so the server im' on has only 4 cores? [23:45] and now this: [23:45] garry@kvr02:~$ cat /proc/cpuinfo | grep 'model name' [23:45] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:45] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] * up_the_irons slaps jeev with a trout [23:46] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz [23:46] jeev: now it has 8 :) [23:46] [FBI]: welcome back [23:46] but... [23:46] it had 4, so 4 customers? [23:46] * jeev pokes up_the_irons [23:47] jeev: no no, customers can share cores [23:47] oh [23:47] so how many cores do i have [23:47] just one, is it dedicated [23:47] jeev: there is no setting that says "this VM gets this core", although I *can* do that, just haven't. I let the Linux KVM/QEMU scheduler put the VMs on the least loaded core in real time [23:48] ok [23:48] jeev: nobody has a dedicated core; i don't think my business model could support that. cores are the least numerous resource [23:48] RAM is easier [23:49] disk is easiest [23:49] yea [23:49] i dont care realy [23:51] 41 minutes cutting it a bit close weren't we :-p [23:51] heh, i just took a video of all the blinking lights on this box, w/ my iphone [23:51] ballen: oh yeah man, I failed that one hard [23:51] ballen: the RAM did not take at first, still registered 16GB, not 32 [23:51] ah, tough day [23:51] ballen: I had to unrack box and put them in different channels [23:52] :( [23:53] ballen: i'm just happy everything went OK; ya never know when opening boxes and tinkering around [23:53] cutsman: whoa, who are you? :) [23:53] my Nagios is all green! [23:53] yea, I tend to avoid doing such things to production boxes [23:54] ballen: I really try not to also; but sometimes is unavoidable. This will be the last time major maintenance is done on a box before I get DRBD live migrations working; then it will be a mute point [23:55] sounds good [23:55] *** obsidieth has joined #arpnetworks [23:55] *** cutsman has quit IRC ("leaving") [23:56] it is i [23:56] don don don [23:56] wonder who cutsman was [23:56] obsidieth: yo