Who | What | When |
---|---|---|
obsidieth | heh, thats a pretty great domain visinin | [01:55] |
visinin | haha, thanks
i tried to get fre.sh while i was at it, but no luck | [01:56] |
obsidieth | heh it looks like a .sh is like 100 usd | [01:59] |
.......... (idle for 49mn) | ||
*** | toddf_ has joined #arpnetworks | [02:48] |
toddf has quit IRC (Read error: 110 (Connection timed out)) | [03:01] | |
..... (idle for 21mn) | ||
up_the_irons | forcefollow: i'm here | [03:22] |
.................. (idle for 1h29mn) | ||
Thorgrim1 | Yawn | [04:51] |
mhoran | up_the_irons: My VM clock is all over the place and ntpd seems not to be doing anything. Ideas? | [04:56] |
up_the_irons | mmm... strange
what ntpd is it exactly? openntpd? i know openntpd will only slowly update a bad clock (so it doesn't jump) | [04:58] |
mhoran | Ah, perhaps I'm an idiot. I would have expected an error message if the config file did not exist, but instead it was running and doing nothing!
So I changed that. Let's see if my clock syncs up now. :) | [04:59] |
up_the_irons | haha
:) | [05:01] |
....... (idle for 31mn) | ||
mhoran: tried sending you a maintenance advisory (actually, to everyone):
A8566E8C59 1542 Thu Sep 10 05:24:31 garry@rails1.arpnetworks.com
(host vroom.ilikemydata.com[71.174.73.69] said: 450 4.7.1 <matt@matthoran.com>: Recipient address rejected: Greylisted for 5 minutes (in reply to RCPT TO command))
mhoran: so I hope you still get it | [05:32] |
Thorgrim1 | Should go through when the mailserver retries to send it
We do the same thing | [05:34] |
*** | Thorgrim1 is now known as Thorgrimr | [05:34] |
up_the_irons | Thorgrimr: ah, gotcha, cool | [05:34] |
Thorgrimr | The idea being that spammers won't waste the time to come back and try again, but any decent MTA will | [05:36] |
mhoran | Dovecot killed itself when I synced my clock and Postfix got confused. Secondary MX, which does greylisting, answered because primary was down. Fun! | [05:39] |
mike-burns | mike-burns tries to convert maintenance window time into UTC then EDT, has to break out a calculator | [05:40] |
up_the_irons | mhoran: at least your secondary picked it up
mike-burns: should be something like 03:00 EDT
Thorgrimr: gotcha, interesting | [05:42] |
obsidieth | whats the syntax to make unreal ircd bind to a range of ports. | [05:43] |
mike-burns | I found a Yahoo Answers thread that converted 11:00 PST to EDT, amusingly. | [05:43] |
up_the_irons | mike-burns: was it correct?
obsidieth: not sure... | [05:43] |
mike-burns | Yup! | [05:44] |
up_the_irons | nice | [05:44] |
obsidieth | doh.
that was easy | [05:47] |
Thorgrimr | No email for me :( | [05:59] |
up_the_irons | Thorgrimr: it's coming real soon (about 10 minutes) | [06:01] |
obsidieth | for the record, i could not be more pleased with how this is working so far up_the_irons | [06:01] |
up_the_irons | RAD
obsidieth: glad you like it :) | [06:02] |
Thorgrimr | up_the_irons: No worries, I'm at work now anyway, and teaching this afternoon, so no play for me | [06:03] |
up_the_irons | ah shucks
;) | [06:03] |
*** | vtoms has quit IRC ("Leaving.") | [06:08] |
up_the_irons | Thorgrimr: ...and it's off! | [06:12] |
Thorgrimr | Alrighty then :) | [06:14] |
mhoran | up_the_irons: So this ntpd issue is related to a Cacti issue I'm trying to track down.
I thought ntpd would keep it in sync, and now that it's set up correctly, it seems to be. | [06:15] |
up_the_irons | mhoran: your cacti or my cacti? :) | [06:16] |
mhoran | However, my cron tasks aren't running on time at all.
My cacti. | [06:16] |
up_the_irons | ah | [06:16] |
mhoran | 5 minute tasks are running, sometimes, over a minute late. | [06:16] |
up_the_irons | sounds like a cron issue | [06:16] |
mhoran | So my graphs are basically useless.
I figured it was because my clock wasn't synced and it was getting all confused, but it seems something else may be up. Wondering if you've seen anything similar. | [06:16] |
up_the_irons | if the time is right, but cron doesn't execute on time, suspect cron. perhaps it needs to be restarted
i've seen time drift on VMs (pretty much across the board: Xen, KVM, VMware, etc...) but ntpd pretty much keeps it in line
i haven't had any issues with cron though, as long as time is sync'd | [06:17] |
mhoran | Yeah. I've never seen this cron issue before. Didn't have it when running VMware, and my Xen boxes at work seem to be doing just fine.
Huh. | [06:18] |
up_the_irons | mhoran: what time does your VM show? | [06:19] |
mhoran | Thu Sep 10 09:18:22 EDT 2009 | [06:19] |
up_the_irons | as of now, the host says:
Thu Sep 10 06:19:20 PDT 2009
i'm not sure how cron gets its time, from hardware clock, OS, or what.. probably through whatever stdlib C call provides that | [06:19] |
*** | heavysixer has joined #arpnetworks | [06:22] |
mhoran | Okay. I restarted cron, let's see what that does. | [06:22] |
up_the_irons | roger
heavysixer: how's it hangin | [06:23] |
heavysixer | up_the_irons: yo
just getting ready to start working on digisynd's site again. you? | [06:24] |
up_the_irons | heavysixer: provisioning VMs | [06:25] |
heavysixer | gotcha
you are getting quite a few clients now huh? | [06:25] |
mhoran | Nope, still screwed up. Huh. | [06:25] |
up_the_irons | heavysixer: it's picking up | [06:25] |
mhoran | up_the_irons: That's quite the upgrade the server is getting! | [06:25] |
up_the_irons | mhoran: logs don't show anything?
mhoran: yeah, 16 GB of RAM is going in, and another quad-core Xeon @ 2.66GHz bad boy
heavysixer: haven't done Amber's VPS yet, had two orders in front of her; tell her not to hate me ;) | [06:26] |
heavysixer | up_the_irons: no worries we are not at the point where we are ready to deploy. | [06:28] |
up_the_irons | heavysixer: cool | [06:28] |
heavysixer | up_the_irons: soon though ;-) | [06:28] |
mhoran | up_the_irons: Will this bring it to dual quad core or quad quad core? | [06:28] |
heavysixer | so no slacking | [06:29] |
up_the_irons | heavysixer: oh it's gonna be up today, for sure. :) | [06:29] |
heavysixer | up_the_irons: cool | [06:29] |
up_the_irons | mhoran: dual quad | [06:29] |
mhoran | That's exciting. | [06:30] |
up_the_irons | mhoran: really interested to see how load avg goes down with the addition of more cores to distribute the load | [06:30] |
mhoran | My static Web site will certainly benefit from the power! | [06:30] |
up_the_irons | LOL | [06:30] |
mhoran | Yeah. | [06:30] |
up_the_irons | if the load avg drops in half, that'd be awesome utilization of the cores | [06:30] |
mhoran | Oh totally. | [06:30] |
up_the_irons | omg, now I know how to say "Sent from my iPhone" in Japanese
iPhoneから送信 | [06:32] |
mhoran | Hahaha. | [06:32] |
up_the_irons | saw that on the bottom of a new customer's email
(who is in Japan) | [06:32] |
mhoran | Huh. So my clock seems to be fine, but these tasks are not running 5 minutes apart. They're all over the place.
It's almost like the scheduler is not synced with the clock or something. | [06:33] |
up_the_irons | mhoran: you should do like:
*/1 * * * * root uptime
erm */2 or w/e it is | [06:35] |
mhoran | Yeah. | [06:36] |
up_the_irons | so it emails you something simple
see if you get the emails on-time and at regular intervals | [06:36] |
mhoran | Got it in there now. We'll see. :) | [06:36] |
up_the_irons | cool | [06:37] |
mhoran | That's a good way to make me feel loved!
Send myself e-mail. | [06:37] |
up_the_irons | LOL | [06:37] |
mhoran | So I have this set to run every minute. It ran at 9:46, then 9:49. Skipped everything in between. Interesting.
Sep 10 09:46:20 friction /usr/sbin/cron[25023]: (mhoran) CMD (/bin/date)
Sep 10 09:49:05 friction /usr/sbin/cron[25051]: (mhoran) CMD (/bin/date)
Didn't even try to run it in between. | [06:50] |
up_the_irons | mhoran: did you use "*/2", i think that's every other minute | [06:53] |
mhoran | * * * * * | [06:54] |
up_the_irons | heh
let's see, on one of my Linux VMs, I have:
Sep 9 06:20:01 ice /USR/SBIN/CRON[12324]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
Sep 9 06:25:02 ice /USR/SBIN/CRON[12400]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Sep 9 06:30:01 ice /USR/SBIN/CRON[12430]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
so that's pretty much right on every 5 minutes
let me find a FreeBSD one... | [06:55] |
mhoran | Yeah. I'm drifting way past the minute. Interesting. | [06:55] |
up_the_irons | Sep 10 06:35:36 freebsd-ha /usr/sbin/cron[19804]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:41:08 freebsd-ha /usr/sbin/cron[19807]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:46:25 freebsd-ha /usr/sbin/cron[19823]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:50:36 freebsd-ha /usr/sbin/cron[19826]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:56:08 freebsd-ha /usr/sbin/cron[19833]: (root) CMD (/usr/libexec/atrun)
atrun is supposed to run every 5 minutes
and look at that, cron is like being lazy about it
it's "about" every 5 minutes
the time delta is pretty close to 5 minutes, but it's not executing "on the dot" | [06:58] |
mhoran | Yeah. That's what's upsetting cacti.
Hrm. Thu Sep 10 09:56:52 EDT 2009 That one was almost a minute late! | [07:01] |
up_the_irons | i wonder if cron is seeing a different time | [07:02] |
mhoran | [25430] TargetTime=1252591500, sec-to-wait=36
[25430] sleeping for 36 seconds
[25430] TargetTime=1252591500, sec-to-wait=-115
Interesting. | [07:11] |
..... (idle for 22mn) | ||
up_the_irons | whoa, weird | [07:33] |
toddf_ | I received two maintenance notices, identical, except for 'Message-Id:...@rails1.arpnetworks.com>' vs 'Message-ID: ..@garry-thinkpad.arpnetworks.com>' + 'User-Agent: Mutt/1.5.16..'
just fwiw ;-) | [07:47] |
*** | toddf_ is now known as toddf
vtoms has joined #arpnetworks | [07:51] |
up_the_irons | toddf: There was an error when sending the first one, so to be safe I sent it again from just my regular client (mutt)
toddf: looks like you got them both anyway :)
absolutely time for me to expire, thankfully i slept a little already
cd $bed | [07:59] |
toddf | ;-) | [08:01] |
....... (idle for 30mn) | ||
mike-burns | [mike@jack] ~% date; sleep 1; date
Thu Sep 10 11:30:44 EDT 2009
Thu Sep 10 11:30:47 EDT 2009 | [08:31] |
mhoran | 10 minutes later,
[mhoran@friction] ~% date; sleep 300; date
Thu Sep 10 11:20:10 EDT 2009
On my work laptop,
[mhoran@mhoran-thinkpad] ~% date; sleep 60; date
Thu Sep 10 11:29:22 EDT 2009
Thu Sep 10 11:30:22 EDT 2009
So, something is up. | [08:31] |
*** | vtoms has quit IRC (Remote closed the connection) | [08:34] |
vtoms has joined #arpnetworks | [08:39] | |
mhoran | [mhoran@friction] ~% date; sleep 300; date
Thu Sep 10 11:20:10 EDT 2009
Thu Sep 10 11:41:10 EDT 2009 | [08:48] |
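The ad-hoc `date; sleep N; date` checks being passed around can be wrapped in a helper that reports requested vs. actual wall-clock seconds (the helper name is made up):

```shell
#!/bin/sh
# Wrap the "date; sleep N; date" test: print requested vs. actual
# elapsed wall-clock seconds. On the affected FreeBSD VMs, actual drifts
# far past requested (300 requested, ~1260 actual in mhoran's run above).
sleep_drift() {  # arg: seconds to sleep
    start=$(date +%s)
    sleep "$1"
    end=$(date +%s)
    echo "requested=$1 actual=$((end - start))"
}
```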
........ (idle for 38mn) | ||
*** | greg_dolley has joined #arpnetworks | [09:26] |
greg_dolley | hello! | [09:26] |
*** | ballen has joined #arpnetworks | [09:39] |
ballen | 30 minutes of downtime eh? | [09:39] |
mhoran | ballen: You running FreeBSD? | [09:42] |
ballen | ya | [09:42] |
mhoran | Been trying to diagnose some issues with cron, which seem to trace back to sleep(), which may even be scheduler related.
Does date; sleep 60; date act as expected for you? | [09:42] |
ballen | rgr chking | [09:43] |
mhoran | When I ran it, sleep 300 waited for took 21 minutes. | [09:43] |
ballen | up_the_irons: when you get in let me know, would like to discuss expectations of uptime, etc and so forth
mhoran: yea its all sorts of messed up
[ballen@arp ~]$ date; sleep 10; date
Thu Sep 10 12:46:29 EDT 2009
Thu Sep 10 12:47:09 EDT 2009 | [09:44] |
mhoran | 7.2? | [09:47] |
ballen | ya | [09:47] |
mhoran | Yeah. Something is definitely borked.
I noticed because my 5 minute Cacti cron has been complaining for months. :) | [09:47] |
ballen | does 7.2 use the new scheduler | [09:48] |
mhoran | I ran cron in debug mode and saw that it had a negative sec-to-wait. So then I tested sleep, which is exhibiting the same behavior.
Yes, it does. Not totally sure if it's scheduler related, or something else. But, something is definitely busted. | [09:48] |
ballen | yep | [09:49] |
mhoran | Hopefully up_the_irons can help us figure it out.
May need to mail the FreeBSD lists as well. Probably after work. It's crazy today. | [09:49] |
ballen | yea
just woke up from working on thesis till 4am last night
hopefully no one at work misses me
whats sleep use to tell time? | [09:49] |
mhoran | nanosleep() is the syscall. | [09:52] |
ballen | hmm, yea really don't feel like figure this one out at the moment. Let me know if figure out anything. My gut feeling is it has to do with KVM/Qemu
likely how nanosleep is counting time and how KVM is sharing cycles brb need coffeee | [09:55] |
.... (idle for 15mn) | ||
*** | ballen is now known as ballen|away
heavysixer has quit IRC (Read error: 145 (Connection timed out)) | [10:11] |
heavysixer has joined #arpnetworks | [10:21] | |
........ (idle for 37mn) | ||
greg_dolley has quit IRC (Read error: 110 (Connection timed out)) | [10:58] | |
ballen|away is now known as ballen
ballen has quit IRC (Remote closed the connection) | [11:03] | |
.............. (idle for 1h7mn) | ||
up_the_irons | mhoran: here's what I have from a Linux VM:
garry@ice:~ $ date; sleep 1; date
Thu Sep 10 12:12:07 PDT 2009
Thu Sep 10 12:12:08 PDT 2009
garry@ice:~ $ date; sleep 20; date
Thu Sep 10 12:12:15 PDT 2009
Thu Sep 10 12:12:35 PDT 2009
the host box is the same
on FreeBSD it is jacked:
[arpnetworks@freebsd-ha ~]$ date; sleep 1; date
Thu Sep 10 12:15:37 PDT 2009
Thu Sep 10 12:15:40 PDT 2009 | [12:14] |
mhoran | Good to know. Looks like all of us on FreeBSD are experiencing this.
Do you have something that's not 7.2 (the old scheduler)? | [12:15] |
up_the_irons | mhoran: i believe I do, but it's stopped right now cuz i ran out of RAM (hence the maintenance tonight :)
w00t, OpenBSD still rocks it: | [12:16] |
mhoran | Heh. Okay. | [12:16] |
up_the_irons | s3.lax:~> date; sleep 20; date
Thu Sep 10 12:01:14 PDT 2009
Thu Sep 10 12:01:34 PDT 2009
s3.lax:~> date; sleep 1; date
Thu Sep 10 12:01:39 PDT 2009
Thu Sep 10 12:01:40 PDT 2009
s3.lax:~> date; sleep 1; date
Thu Sep 10 12:01:41 PDT 2009
Thu Sep 10 12:01:42 PDT 2009 | [12:16] |
mhoran | Interesting. | [12:17] |
up_the_irons | given OpenBSD is probably the least virtualized OS, and it is working, I'd have to point the finger at FreeBSD on this one, instead of KVM/QEMU. however, it probably has to do with the interaction of the two | [12:18] |
mhoran | Yeah. I did not have this problem with VMware. | [12:19] |
up_the_irons | that OpenBSD VM is on the same host too
mhoran: you tried 7.2 w/ VMware? | [12:19] |
mhoran | Ooh, this is 7.1.
I think I have a 7.2 running somewhere.
7.1/ESXi --
vps% date; sleep 20; date
Thu Sep 10 15:19:15 EDT 2009
Thu Sep 10 15:19:35 EDT 2009
vps% date; sleep 1; date
Thu Sep 10 15:19:38 EDT 2009
Thu Sep 10 15:19:39 EDT 2009
vps% date; sleep 1; date
Thu Sep 10 15:19:40 EDT 2009
Thu Sep 10 15:19:41 EDT 2009
So that's good. | [12:20] |
up_the_irons | I'll play with it more tonight around the maintenance window; I'll have a lot of time to kill then | [12:21] |
mhoran | Yeah, 7.2/ESX is fine. Same as 7.1. | [12:22] |
up_the_irons | ah ok | [12:22] |
......... (idle for 43mn) | ||
*** | greg_dolley has joined #arpnetworks | [13:05] |
heavysixer has quit IRC () | [13:11] | |
up_the_irons | greg_dolley: welcome to IRC | [13:18] |
greg_dolley | thx :-) | [13:20] |
cablehead | greg_dolley: haha, yo greg
this is andy from revver, not sure if you remember me | [13:28] |
mhoran | up_the_irons: Do you have a machine you can test this on? Apparently adding hint.apic.0.disabled="1" may fix this. | [13:33] |
up_the_irons | mhoran: machine = FreeBSD VM?
cablehead: he must be at lunch... | [13:34] |
mhoran | up_the_irons: Yes. | [13:34] |
up_the_irons | mhoran: sure, where should I put that? in sysctl.conf? | [13:35] |
mhoran | Oh, I left out -- adding ... to /boot/loader.conf | [13:35] |
up_the_irons | ah ah | [13:35] |
cablehead | up_the_irons: either that or rocking out to some thumping metal | [13:35] |
up_the_irons | cablehead: true!
mhoran: does this look right:
[arpnetworks@freebsd-ha ~]$ cat /boot/loader.conf
hint.apic.0.disabled="1"
[arpnetworks@freebsd-ha ~]$ | [13:35] |
mhoran | That should be it. | [13:37] |
up_the_irons | ok, rebooting... | [13:37] |
greg_dolley | cablehead: hey! I remember you ;-) | [13:45] |
jeev | man
i dont think i'll ever get a freebsd vps | [13:50] |
mhoran | Don't say that! We'll get to the bottom of this ...
Aside from that, it works great! | [13:50] |
mike-burns | Yeah, no complaints from me. I don't do a lot of sleep-related work. | [13:51] |
jeev | noo
not cause of that
when i use bsd, i use it for serious things.. i build things by hand, or ports never packages. | [13:51] |
up_the_irons | jeev: so what's the prob? ;)
with ports, you can install everything from source, that's one cool thing about it | [13:55] |
jeev | yea | [13:56] |
up_the_irons | mhoran: ok, so, had trouble with that line. it won't find the disk for some reason. I had to go into boot loader and do 'unset hint.apic.0.disabled' and then it booted fine | [13:56] |
jeev | except, vps.. = slow ;) | [13:56] |
*** | vtoms has quit IRC ("Leaving.") | [13:57] |
mhoran | Interesting. | [13:57] |
up_the_irons | mhoran: you want to play with my test VM?
i could give you login and console | [13:58] |
mhoran | Sure, probably have some time after work ...
jeev: Haven't found my VPS to be slow. At work, we run everything virtualized, and it's fine. | [13:58] |
up_the_irons | ok, just don't trash it too much, I use it for DRBD testing at the moment :) | [13:59] |
mhoran | Heh, okay. No worries. | [14:00] |
mike-burns | jeev: My ARP Networks VPS is faster than my laptop much of the time. | [14:03] |
up_the_irons | faster? w00t
up_the_irons pets his new Intel Xeon E5430 | [14:04] |
*** | visinin has quit IRC ("sleep") | [14:17] |
jeev | eh
i mean like how often can you build world.
i want a peer1 LA colo for cheap. | [14:20] |
up_the_irons | jeev: peer1 ain't cheap i hear | [14:29] |
......... (idle for 44mn) | ||
jeev | yea
there was someone on wht who did colo for like 70 or somthing i forgot what bandwidth but he told me he's leaving it soon | [15:13] |
*** | ballen has joined #arpnetworks | [15:15] |
ballen | up_the_irons: you on | [15:16] |
up_the_irons | ballen: yo | [15:16] |
ballen | 30 minutes of downtime seems a bit long no
? | [15:17] |
up_the_irons | ballen: it is, but what can I do; I have RAM and a CPU to install, and if I rush it, something could break, and then the downtime would be much greater
ballen: if it was just RAM, it'd be a lot quicker | [15:17] |
ballen | no way to migrate vm's? | [15:18] |
up_the_irons | ballen: it would take longer than 30 minutes ;) a 'dd' from LVM to disk, then transfer to another server, and 'dd' from disk back to LVM <-- takes some time as well, and you're down the whole time | [15:19] |
ballen | sigh.... | [15:19] |
up_the_irons | ballen: my ultimate goal is to have the VM disk images to be on a DRBD volume, if performance turns out to be still good
ballen: i'm currently testing this | [15:20] |
Nat_UB | SAN storage...that fix it all | [15:20] |
ballen | so centrally store the images, and do diskless botting on the Qemu
booting even | [15:21] |
up_the_irons | ballen: and then it would be possible to "live" migrate and if everything goes right, there would be no downtime at all
Nat_UB: SAN storage is very expensive | [15:21] |
Nat_UB | Tell me about it...doing that at two sites at $WORK | [15:21] |
ballen | doesn't DRBL use NFS ? | [15:21] |
up_the_irons | ballen: DRBD would be "central" store (more like, two boxes get paired), and it is trivial to boot off it. That's a solved problem, but there are performance issues to account for | [15:23] |
ballen | yea I've used DRBL a year ago
to deploy a 80+ machine lab | [15:23] |
up_the_irons | ballen: DRBD is a distributed block device; not related to NFS | [15:23] |
ballen | awwww
Diskless Remote Booting Linux hah | [15:23] |
up_the_irons | LOL | [15:24] |
ballen | anywho | [15:24] |
up_the_irons | n/m
;)
ballen: trust me, I feel your pain, I have several important VMs of my own that are going down (arpnetworks.com site itself, pledgie.com, my shared hosting server) | [15:24] |
ballen | whats the need for the new hardware, obviously other than increasing capacity. Couldn't just buy a new server? | [15:25] |
up_the_irons | ballen: I'm going to try to be as quick as possible; and once I certify the DRBD setup I'm testing currently, I will those who want to be on that, go on it. | [15:25] |
Nat_UB | ballen: Giving him hell huh? | [15:25] |
ballen | a little
just 30 minutes downtime suuuucks | [15:26] |
Nat_UB | :) | [15:26] |
ballen | but understandable I suppose | [15:26] |
Nat_UB | In this case I'm still building....so downtime no concern for me hehehehe
I work in the 'NO DOWNTIME' field...so I've heard all the griping before....Irons, keep up the good work! | [15:26] |
up_the_irons | ballen: the "just" in "just buy a new server" is the hard part ;) I don't buy cheap boxes, I have to shell out about $6K, and that just isn't gonna happen given I can double the cores on the current box *and* double the RAM
on the current box | [15:27] |
ballen | up_the_irons: does DRBD do synchronous writes?
up_the_irons: well you should have thought of that ahead of time :-p | [15:27] |
up_the_irons | ballen: "DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network based" | [15:28] |
ballen | yes yes | [15:28] |
up_the_irons | ballen: dude, to be honest, I was like this: | [15:28] |
ballen | when you write to one side of mirror
does it wait till the other side to finish | [15:28] |
up_the_irons | "I don't know how well my new VPS offering will sell, so let's not buy both CPUs and 32 GB of RAM off the bat, start with 1 CPU and 16 GB of RAM and upgrade later" <-- now that bites me in the ass :) | [15:29] |
ballen | wondering if that is the source of performance issues | [15:29] |
up_the_irons | ballen: OK, gotcha. that part of it is configurable
ballen: I configure it to wait for the write on the other end; it has to be very consistent | [15:29] |
ballen | up_the_irons: yea I figured that was your train of thought. Just giving you a hard time
yea async without conflict resolution is a piss poor idea | [15:30] |
up_the_irons | ballen: my thinking is, i'd rather sacrifice performance than have a catastrophic failure | [15:30] |
ballen | althought
although* if you think of it in a master -> slave configuration where the slave will never write | [15:31] |
up_the_irons | ballen: where it won't matter much is in reads, cuz reads will come off the local disk; which is kinda an advantage over an external SAN setup | [15:31] |
ballen | whats wrong with doing writes asynchronously | [15:31] |
up_the_irons | ballen: well, master box crashes while writing to local disk, yet that block isn't replicated on the slave? I think that would be a problem | [15:33] |
ballen | hmm
I guess its a matter of not allowing it to get too far out of sync
and allowing a certain amount of time for lost data
whatever one would be comfortable with | [15:33] |
up_the_irons | ballen: yeah, I think the #1 goal with DRBD is for the data to never go out of sync; but it gives you some knobs to play with | [15:35] |
ballen | so how much performance loss are you seeing using it | [15:35] |
up_the_irons | i'm not to the hard benchmark part yet; more like "play with the machine; does it feel slow?"
so far, i can't really tell the different | [15:36] |
ballen | ah | [15:36] |
up_the_irons | *difference | [15:36] |
ballen | cool, what kind of network do you have between the machines | [15:36] |
Nat_UB | He's got 10gig... :) | [15:37] |
up_the_irons | ballen: 1G | [15:37] |
ballen | just one link per box?
ah dedicated to task? | [15:37] |
up_the_irons | ballen: right now i actually have two boxes physically linked together, no switch in between | [15:38] |
ballen | ah | [15:38] |
up_the_irons | ballen: if I got more NICs, I could bond them, and I hear the network speed would be faster than disk write speed and then performance issues are mute; but I want to *see* this in action before I certify it
Nat_UB: i wish i had 10G :) the intel ones are like $2K a pop | [15:39] |
ballen | yea good plan. There may be some overhead in bonding. Does DRBD run over TCP/IP | [15:40] |
up_the_irons | ballen: yes, it does run over TCP/IP | [15:40] |
Nat_UB | Haven't tried 10g but done some bonded stuff | [15:40] |
up_the_irons | AoE (ATA over Ethernet) is another alternative, that runs on layer 2, but is pretty much feature-less and does not afford much protection against someone accidentally writing to the volume from two boxes at the same time (which will instantly corrupt it)
I use AoE for backup images only
but that said, AoE is pretty cool in its simplicity | [15:41] |
ballen | yea AoE is pretty neat | [15:42] |
up_the_irons | i got this from a 'dd' test on my FreeBSD DRBD testing VM:
1048576000 bytes transferred in 76.882014 secs (13638769 bytes/sec)
so like 13 MBps
not all that great
but those are real writes, not cached | [15:43] |
ballen | yea, whats that same benchmark on a typical FreeBSD VM | [15:44] |
up_the_irons | let me see
this is what i'm running: dd if=/dev/zero of=delete-me bs=1M count=2000
you want to make sure 'count' is about double your RAM, so caching goes away
now, the performance of 'dd' will give some raw numbers that may or may not correlate with how the VM actually performs during normal use; that would depend on a lot of other factors. Even if dd has lower speeds on DRBD, the trade-off for uptime and ease of VM migration may well be worth it | [15:45] |
ballen | true
I'll be on later, and will be around during the maintenance window. Let me know if you make any breakthroughs with DRBD. | [15:51] |
*** | ballen has quit IRC ("Bye!") | [15:54] |
up_the_irons | ballen: will do!
Nat_UB: thanks for the support up there, BTW ;) | [15:54] |
Nat_UB | Sure thing! U'r the man! | [15:55] |
up_the_irons | :) | [15:55] |
....... (idle for 30mn) | ||
*** | greg_dolley has quit IRC () | [16:25] |
heavysixer has joined #arpnetworks | [16:32] | |
................... (idle for 1h33mn) | ||
vtoms has joined #arpnetworks | [18:05] | |
...... (idle for 27mn) | ||
heavysixer has quit IRC ()
vtoms has quit IRC ("Leaving.") | [18:32] | |
...... (idle for 29mn) | ||
Nat_UB has quit IRC (bartol.freenode.net irc.freenode.net)
Nat_UB has joined #arpnetworks | [19:05] | |
heavysixer has joined #arpnetworks | [19:14] | |
....... (idle for 34mn) | ||
up_the_irons | we need more IRC'ers
up_the_irons goes back to editing build logs | [19:48] |
...... (idle for 25mn) | ||
*** | heavysixer has quit IRC (Read error: 104 (Connection reset by peer)) | [20:13] |
.... (idle for 17mn) | ||
jeev | build bots
;) | [20:30] |
up_the_irons | no, real people :) | [20:32] |
*** | timburke has quit IRC (Remote closed the connection)
timburke has joined #arpnetworks | [20:39] |
obsidieth | i think i might have a recruit for you | [20:56] |
.... (idle for 19mn) | ||
up_the_irons | up_the_irons does happy clappy hands
obsidieth: nice :) | [21:15] |
obsidieth: someone you know? or found on a message board?
btw, guys, let me know if there are other boards out there besides WHT that have an advertising section; I post weekly on WHT and recently found webhostingboard.net.
If there are others, I'd like to know :)
cd $data-center | [21:30] |
jeev | i will advertise you for $1000/month on my google PR1 | [21:33] |
up_the_irons | jeev: LOL | [21:33] |
jeev | ;)
palm pre is so lame | [21:33] |
*** | ballen has joined #arpnetworks | [21:45] |
ballen | ping | [21:45] |
obsidieth | yeah someone i know
people on efnet arent used to servers that actually stay up:p | [21:47] |
ballen | huh | [21:48] |
......... (idle for 41mn) | ||
up_the_irons | efnet, heh
back in the day | [22:29] |
jeev | efnet sucks | [22:29] |
up_the_irons | lol, DRBD is now following you on Twitter!
"DRBD is now following you on Twitter!" that is | [22:29] |
ballen | heh
looking at espresso machines, and grinders
is dropping a grand on an espresso machine + grinder a good thing? | [22:30] |
up_the_irons | wow
i'd rather buy a good 48 port GB switch for that
and for a grand, even that is hard to find | [22:30] |
ballen | heh
have a good gig switch already don't use it | [22:31] |
up_the_irons | up_the_irons motions ballen to hand it over | [22:31] |
ballen | I keep my computer equipment at home very light
to keep power down | [22:32] |
up_the_irons | yeah, i don't have much at home
i keep it all at the data center :) | [22:32] |
ballen | its just a netgear | [22:32] |
jeev | shit i just bought a dell poweredge
48 port gig
and i haven't even sent it to the datacenter | [22:32] |
up_the_irons | there's some new cage here where the fool has like 30 power circuits in 200 sq. ft.
i was like "wwhhhhhhhhaaaaa?" | [22:32] |
jeev | that must be me | [22:33] |
up_the_irons | LOL | [22:33] |
ballen | any idea what amperage? | [22:33] |
jeev | i was a hazard at uscolo
they would detour people around my stuff during tours | [22:33] |
up_the_irons | i hate dell management interface on their f*cking switches, but they are priced well | [22:33] |
jeev | yea i aint gonna use the management stuff
just cli | [22:33] |
ballen | http://www.netgear.com/Products/Switches/FullyManaged10_100_1000Switches/GSM7212.aspx | [22:33] |
jeev | it's some ieee standard | [22:33] |
ballen | my home switch heh | [22:33] |
up_the_irons | ballen: looks like 20 ampers each; they must be in redundant A/B pair, cuz I can't imagine they actually gave him all that power | [22:34] |
ballen | damn | [22:34] |
up_the_irons | ballen: i've heard some good things about the GSM
ironically brb | [22:34] |
ballen | k
yea its a solid switch, but I really haven't had to do much with it
Grinder: http://www.visionsespresso.com/node/73
Espresso Machine: http://www.wholelattelove.com/Rancilio/ra_silvia_2009.cfm | [22:34] |
jeev | why do you need that | [22:38] |
ballen | its more of a question of why do I not need that
as well as a small coffee addiction | [22:38] |
jeev | lol | [22:39] |
ballen | I really enjoy espresso drinks, and it would save me money in the long run if I don't goto any cafe
$3 latte once a day | [22:40] |
jeev | that's the point of coffee or drinks | [22:41] |
up_the_irons | LOL | [22:41] |
jeev | to go into the place | [22:41] |
ballen | is 1095 bucks | [22:41] |
jeev | and see how bitches
hot | [22:41] |
up_the_irons | hahaha | [22:41] |
jeev | up_the_irons wishes he could get the girls from glendale! | [22:41] |
ballen | lmao | [22:41] |
jeev | well some are fugly
but some are hot | [22:41] |
up_the_irons | the chicks in glendale, yeah some are pretty hot | [22:42] |
jeev | sexUAL | [22:42] |
ballen | def some hot females at some various cafes I've been into
also cafe is like 15 minutes away and in the morning, F that! | [22:42] |
jeev | shit
sounds like you're from Charlevoix, MI when you say it's 15 min away | [22:44] |
ballen | ya | [22:44] |
jeev | wow, what a guess! | [22:45] |
ballen | hah | [22:45] |
jeev | ballen, gto a linux or bsd router on your cable ? | [22:45] |
ballen | so this is a fun thing
actually I'm behind a WISP who uses AT&T, and Charter | [22:45] |
jeev | ahh
i was gonna say sniff me some mac addresses i steal cable internet sometimes | [22:46] |
ballen | hah | [22:46] |
jeev | although i have a primary isp
i prefer stealing charter, 20 megs sometimes their network sucks | [22:46] |
up_the_irons | T - 15 minutes | [22:46] |
ironically, the time right before a maintenance window is a time where I actually wait and do nothing (I've already prepared), so now I'm just waiting for the clock to strike the right time
weird | [22:51] | |
jeev | what are you gonna do
i didn't read it | [22:51] |
ballen | yep, always annoying period of time | [22:51] |
up_the_irons | cuz I could start now, but I told everyone 11, so I must wait
jeev: RAM + CPU upgrade | [22:51] |
jeev | on everything ? | [22:52] |
up_the_irons | jeev: just one box, but it holds the majority of the guys in here | [22:52] |
jeev | is arpnetworks just one box >? :)
i dont mind really so far, stable as fark | [22:52] |
up_the_irons | jeev: no, i have several, but the one in question is my newest | [22:52] |
jeev | cool
so how is CPU split is it burst for everyone ? | [22:52] |
up_the_irons | no burst
if you ordered 1 cpu, you get 1 cpu | [22:53] |
jeev | then what
ahh how many cpu's in the box i'm on | [22:53] |
up_the_irons | some guys are running SMP, but not many
jeev: 1 CPU, quad core | [22:53] |
jeev | so i'm considered a one cpu user
since cpuinfo shows me a single cpu | [22:54] |
ballen | so I actually have a core all my to myself? | [22:54] |
jeev | so the server i'm on has 4 cores right now
so 4 users? heh | [22:55] |
up_the_irons | ok guys, T minus 1 minute
i'll get disconnected | [23:00] |
ballen | k have fun, don't break anything ;-) | [23:00] |
*** | mike-burns has quit IRC ("WeeChat 0.2.6.3") | [23:01] |
......... (idle for 40mn) | ||
[FBI] starts logging #arpnetworks at Thu Sep 10 23:41:06 2009
[FBI] has joined #arpnetworks | [23:41] | |
ballen | see I just had to swear | [23:41] |
jeev | heh
is that personal stop logging ? | [23:42] |
ballen | hmm
I'd assume so [FBI]: off | [23:43] |
*** | [FBI] has left
[FBI] starts logging #arpnetworks at Thu Sep 10 23:45:00 2009
[FBI] has joined #arpnetworks | [23:43] |
ballen | and hes back | [23:45] |
up_the_irons | w00000000000000000000000000000000t
what did I miss? I'll show you what you guys missed:
             total       used       free     shared    buffers     cached
Mem:         31856      15133      16723          0       2905         64
look at "free" :) | [23:45] |
ballen | nice | [23:45] |
jeev | up_the_irons, i cancelled while you were gone.
lol just kidding | [23:45] |
ballen | lmao
ahahahah | [23:45] |
jeev | up_the_irons, so the server im' on has only 4 cores? | [23:45] |
up_the_irons | and now this:
garry@kvr02:~$ cat /proc/cpuinfo | grep 'model name'
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
* up_the_irons slaps jeev with a trout
jeev: now it has 8 :)
[FBI]: welcome back | [23:45] |
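(Editor's note: a quicker way to count cores than scanning the repeated grep output, sketched for a Linux box; `nproc` is from GNU coreutils and may not be on every system.)

```shell
# Count logical CPUs directly instead of eyeballing repeated lines.
# /proc/cpuinfo is Linux-specific.
grep -c 'model name' /proc/cpuinfo   # one matching line per logical CPU
nproc                                # coreutils shortcut for the same idea
```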
jeev | but...
it had 4, so 4 customers? jeev pokes up_the_irons | [23:46] |
up_the_irons | jeev: no no, customers can share cores | [23:47] |
jeev | oh
so how many cores do i have just one, is it dedicated | [23:47] |
up_the_irons | jeev: there is no setting that says "this VM gets this core", although I *can* do that, just haven't. I let the Linux KVM/QEMU scheduler put the VMs on the least loaded core in real time | [23:47] |
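(Editor's note: the floating-vs-pinned distinction described above can be sketched with libvirt's `virsh vcpupin`; the guest name `vm01` and core numbers here are made up for illustration.)

```shell
# By default KVM/QEMU vCPUs float: the host scheduler places each vCPU
# on whichever core is least loaded. Pinning is opt-in and per-vCPU.
virsh vcpupin vm01            # show current affinity (default: all cores)
virsh vcpupin vm01 0 2        # pin vCPU 0 of guest vm01 to physical core 2
```

This is a config/ops sketch requiring a libvirt host with a running guest, so it is not directly runnable here.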
jeev | ok | [23:48] |
up_the_irons | jeev: nobody has a dedicated core; i don't think my business model could support that. cores are the least numerous resource
RAM is easier disk is easiest | [23:48] |
jeev | yea
i dont care realy | [23:49] |
ballen | 41 minutes cutting it a bit close weren't we :-p | [23:51] |
up_the_irons | heh, i just took a video of all the blinking lights on this box, w/ my iphone
ballen: oh yeah man, I failed that one hard ballen: the RAM did not take at first, still registered 16GB, not 32 | [23:51] |
ballen | ah, tough day | [23:51] |
up_the_irons | ballen: I had to unrack box and put them in different channels | [23:51] |
cutsman | :( | [23:52] |
up_the_irons | ballen: i'm just happy everything went OK; ya never know when opening boxes and tinkering around
cutsman: whoa, who are you? :) my Nagios is all green! | [23:53] |
ballen | yea, I tend to avoid doing such things to production boxes | [23:53] |
up_the_irons | ballen: I really try not to also; but sometimes it's unavoidable. This will be the last time major maintenance is done on a box before I get DRBD live migrations working; then it will be a moot point | [23:54] |
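(Editor's note: a minimal sketch of the DRBD-plus-live-migration idea mentioned above; the resource name `r0`, guest `vm01`, and target host `kvr03` are all hypothetical, and a real setup needs DRBD in dual-primary mode plus matching libvirt config on both hosts.)

```shell
# With the guest's disk replicated to the target host via DRBD,
# libvirt can move the running guest without a reboot:
drbdadm primary r0                                  # promote replica on target
virsh migrate --live vm01 qemu+ssh://kvr03/system   # move running guest
```

This is an ops sketch, not a runnable recipe; the point is that once storage is replicated, host maintenance no longer requires guest downtime.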
ballen | sounds good | [23:55] |
*** | obsidieth has joined #arpnetworks
cutsman has quit IRC ("leaving") | [23:55] |
obsidieth | it is i | [23:56] |
ballen | don don don | [23:56] |
up_the_irons | wonder who cutsman was
obsidieth: yo | [23:56] |