#arpnetworks 2009-09-10, Thu


obsidiethheh, thats a pretty great domain visinin [01:55]
visininhaha, thanks
i tried to get fre.sh while i was at it, but no luck
[01:56]
obsidiethheh it looks like a .sh is like 100 usd [01:59]
.......... (idle for 49mn)
***toddf_ has joined #arpnetworks [02:48]
toddf has quit IRC (Read error: 110 (Connection timed out)) [03:01]
..... (idle for 21mn)
up_the_ironsforcefollow: i'm here [03:22]
.................. (idle for 1h29mn)
Thorgrim1Yawn [04:51]
mhoranup_the_irons: My VM clock is all over the place and ntpd seems not to be doing anything. Ideas? [04:56]
up_the_ironsmmm... strange
what ntpd is it exactly? openntpd?
i know openntpd will only slowly update a bad clock (so it doesn't jump)
[04:58]
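
A note on that behavior: stock OpenNTPD deliberately slews a wrong clock instead of stepping it, so a clock that is minutes off can take a long time to converge. A minimal sketch of the usual setup (the one-line config is real OpenNTPD syntax; how the -s flag gets passed depends on your rc system):

  # /etc/ntpd.conf -- openntpd's whole config can be this single line
  servers pool.ntp.org

  # -s steps the clock once at startup; after that, only gradual slewing
  ntpd -s
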
mhoranAh, perhaps I'm an idiot. I would have expected an error message if the config file did not exist, but instead it was running and doing nothing!
So I changed that. Let's see if my clock syncs up now. :)
[04:59]
up_the_ironshaha
:)
[05:01]
....... (idle for 31mn)
mhoran: tried sending you a maintenance advisory (actually, to everyone):
A8566E8C59 1542 Thu Sep 10 05:24:31 garry@rails1.arpnetworks.com
(host vroom.ilikemydata.com[71.174.73.69] said: 450 4.7.1 <matt@matthoran.com>: Recipient address rejected: Greylisted for 5 minutes (in reply to RCPT TO command))
mhoran: so I hope you still get it
[05:32]
Thorgrim1Should go through when the mailserver retries to send it
We do the same thing
[05:34]
***Thorgrim1 is now known as Thorgrimr [05:34]
up_the_ironsThorgrimr: ah, gotcha, cool [05:34]
ThorgrimrThe idea being that spammers won't waste the time to come back and try again, but any decent MTA will [05:36]
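
That retry-based scheme is commonly wired into Postfix through a policy daemon such as postgrey; a sketch, assuming postgrey is running on its default port (its default delay is 300 seconds, matching the "Greylisted for 5 minutes" in the bounce above):

  # /etc/postfix/main.cf -- unknown client/sender/recipient triplets get a
  # temporary 450 on first contact; a well-behaved MTA retries and passes
  smtpd_recipient_restrictions =
      permit_mynetworks,
      reject_unauth_destination,
      check_policy_service inet:127.0.0.1:10023
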
mhoranDovecot killed itself when I synced my clock and Postfix got confused. Secondary MX, which does greylisting, answered because primary was down. Fun! [05:39]
mike-burnsmike-burns tries to convert maintenance window time into UTC then EDT, has to break out a calculator [05:40]
up_the_ironsmhoran: at least your secondary picked it up
mike-burns: should be something like 03:00 EDT
Thorgrimr: gotcha, interesting
[05:42]
obsidiethwhats the syntax to make unreal ircd bind to a range of ports. [05:43]
mike-burnsI found a Yahoo Answers thread that converted 11:00 PST to EDT, amusingly. [05:43]
up_the_ironsmike-burns: was it correct?
obsidieth: not sure...
[05:43]
mike-burnsYup! [05:44]
up_the_ironsnice [05:44]
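
For the record, the conversion needs no calculator with GNU date (-d is GNU-specific; BSD date spells this -j -f instead). The window time here is illustrative:

  # render a PDT wall-clock time in US Eastern time
  TZ="America/New_York" date -d '2009-09-10 23:00 PDT'
  # => Fri Sep 11 02:00:00 EDT 2009
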
obsidiethdoh.
that was easy
[05:47]
ThorgrimrNo email for me :( [05:59]
up_the_ironsThorgrimr: it's coming real soon (about 10 minutes) [06:01]
obsidiethfor the record, i could not be more pleased with how this is working so far up_the_irons [06:01]
up_the_ironsRAD
obsidieth: glad you like it :)
[06:02]
Thorgrimrup_the_irons: No worries, I'm at work now anyway, and teaching this afternoon, so no play for me [06:03]
up_the_ironsah shucks
;)
[06:03]
***vtoms has quit IRC ("Leaving.") [06:08]
up_the_ironsThorgrimr: ...and it's off! [06:12]
ThorgrimrAlrighty then :) [06:14]
mhoranup_the_irons: So this ntpd issue is related to a Cacti issue I'm trying to track down.
I thought ntpd would keep it in sync, and now that it's set up correctly, it seems to be.
[06:15]
up_the_ironsmhoran: your cacti or my cacti? :) [06:16]
mhoranHowever, my cron tasks aren't running on time at all.
My cacti.
[06:16]
up_the_ironsah [06:16]
mhoran5 minute tasks are running, sometimes, over a minute late. [06:16]
up_the_ironssounds like a cron issue [06:16]
mhoranSo my graphs are basically useless.
I figured it was because my clock wasn't synced and it was getting all confused, but it seems something else may be up.
Wondering if you've seen anything similar.
[06:16]
up_the_ironsif the time is right, but cron doesn't execute on time, suspect cron. perhaps it needs to be restarted
i've seen time drift on VMs (pretty much across the board: Xen, KVM, VMware, etc...)
but ntpd pretty much keeps it in line
i haven't had any issues with cron though, as long as time is sync'd
[06:17]
mhoranYeah. I've never seen this cron issue before. Didn't have it when running VMware, and my Xen boxes at work seem to be doing just fine.
Huh.
[06:18]
up_the_ironsmhoran: what time does your VM show? [06:19]
mhoranThu Sep 10 09:18:22 EDT 2009 [06:19]
up_the_ironsas of now, the host says:
Thu Sep 10 06:19:20 PDT 2009
i'm not sure how cron gets its time, from hardware clock, OS, or what.. probably through whatever stdlib C call provides that
[06:19]
***heavysixer has joined #arpnetworks [06:22]
mhoranOkay. I restarted cron, let's see what that does. [06:22]
up_the_ironsroger
heavysixer: how's it hangin
[06:23]
heavysixerup_the_irons: yo
just getting ready to start working on digisynd's site again.
you?
[06:24]
up_the_ironsheavysixer: provisioning VMs [06:25]
heavysixergotcha
you are getting quite a few clients now huh?
[06:25]
mhoranNope, still screwed up. Huh. [06:25]
up_the_ironsheavysixer: it's picking up [06:25]
mhoranup_the_irons: That's quite the upgrade the server is getting! [06:25]
up_the_ironsmhoran: logs don't show anything?
mhoran: yeah, 16 GB of RAM is going in, and another quad-core Xeon @ 2.66GHz bad boy
heavysixer: haven't done Amber's VPS yet, had two orders in front of her; tell her not to hate me ;)
[06:26]
heavysixerup_the_irons: no worries we are not at the point where we are ready to deploy. [06:28]
up_the_ironsheavysixer: cool [06:28]
heavysixerup_the_irons: soon though ;-) [06:28]
mhoranup_the_irons: Will this bring it to dual quad core or quad quad core? [06:28]
heavysixerso no slacking [06:29]
up_the_ironsheavysixer: oh it's gonna be up today, for sure. :) [06:29]
heavysixerup_the_irons: cool [06:29]
up_the_ironsmhoran: dual quad [06:29]
mhoranThat's exciting. [06:30]
up_the_ironsmhoran: really interested to see how load avg goes down with the addition of more cores to distribute the load [06:30]
mhoranMy static Web site will certainly benefit from the power! [06:30]
up_the_ironsLOL [06:30]
mhoranYeah. [06:30]
up_the_ironsif the load avg drops in half, that'd be awesome utilization of the cores [06:30]
mhoranOh totally. [06:30]
up_the_ironsomg, now I know how to say "Sent from my iPhone" in Japanese
iPhoneから送信
[06:32]
mhoranHahaha. [06:32]
up_the_ironssaw that on the bottom of a new customer's email
(who is in Japan)
[06:32]
mhoranHuh. So my clock seems to be fine, but these tasks are not running 5 minutes apart. They're all over the place.
It's almost like the scheduler is not synced with the clock or something.
[06:33]
up_the_ironsmhoran: you should do like:
*/1 * * * * root uptime
erm
*/2
or w/e it is
[06:35]
mhoranYeah. [06:36]
up_the_ironsso it emails you something simple
see if you get the emails on-time
and at regular intervals
[06:36]
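
Spelled out, the suggested canary is a one-line user crontab entry; a sketch (MAILTO and */N steps are Vixie cron features, present in both the Linux and FreeBSD crons here; the "root" user field above belongs only in the system /etc/crontab format):

  # crontab -e  (per-user crontab: no user field)
  MAILTO=you@example.com
  # every minute; */2 would be every other minute, */5 every five
  * * * * * /bin/date
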
mhoranGot it in there now. We'll see. :) [06:36]
up_the_ironscool [06:37]
mhoranThat's a good way to make me feel loved!
Send myself e-mail.
[06:37]
up_the_ironsLOL [06:37]
mhoranSo I have this set to run every minute. It ran at 9:46, then 9:49. Skipped everything in between. Interesting.
Sep 10 09:46:20 friction /usr/sbin/cron[25023]: (mhoran) CMD (/bin/date)
Sep 10 09:49:05 friction /usr/sbin/cron[25051]: (mhoran) CMD (/bin/date)
Didn't even try to run it in between.
[06:50]
up_the_ironsmhoran: did you use "*/2", i think that's every other minute [06:53]
mhoran* * * * * [06:54]
up_the_ironsheh
let's see, on one of my Linux VMs, I have:
Sep 9 06:20:01 ice /USR/SBIN/CRON[12324]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
Sep 9 06:25:02 ice /USR/SBIN/CRON[12400]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Sep 9 06:30:01 ice /USR/SBIN/CRON[12430]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
so that's pretty much right on every 5 minutes
let me find a FreeBSD one...
[06:55]
mhoranYeah. I'm drifting way past the minute. Interesting. [06:55]
up_the_ironsSep 10 06:35:36 freebsd-ha /usr/sbin/cron[19804]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:41:08 freebsd-ha /usr/sbin/cron[19807]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:46:25 freebsd-ha /usr/sbin/cron[19823]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:50:36 freebsd-ha /usr/sbin/cron[19826]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:56:08 freebsd-ha /usr/sbin/cron[19833]: (root) CMD (/usr/libexec/atrun)
atrun is supposed to run every 5 minutes
and look at that, cron is like being lazy about it
it's "about" every 5 minutes
the time delta is pretty close to 5 minutes, but it's not executing "on the dot"
[06:58]
mhoranYeah. That's what's upsetting cacti.
Hrm.
Thu Sep 10 09:56:52 EDT 2009
That one was almost a minute late!
[07:01]
up_the_ironsi wonder if cron is seeing a different time [07:02]
mhoran[25430] TargetTime=1252591500, sec-to-wait=36
[25430] sleeping for 36 seconds
[25430] TargetTime=1252591500, sec-to-wait=-115
Interesting.
[07:11]
..... (idle for 22mn)
up_the_ironswhoa, weird [07:33]
toddf_I received two maintenance notices, identical, except for 'Message-Id:...@rails1.arpnetworks.com>' vs 'Message-ID: ..@garry-thinkpad.arpnetworks.com>\nUser-Agent: Mutt/1.5.16..'
just fwiw ;-)
[07:47]
***toddf_ is now known as toddf
vtoms has joined #arpnetworks
[07:51]
up_the_ironstoddf: There was an error when sending the first one, so to be safe I sent it again from just my regular client (mutt)
toddf: looks like you got them both anyway :)
absolutely time for me to expire, thankfully i slept a little already
cd $bed
[07:59]
toddf;-) [08:01]
....... (idle for 30mn)
mike-burns[mike@jack] ~% date; sleep 1; date
Thu Sep 10 11:30:44 EDT 2009
Thu Sep 10 11:30:47 EDT 2009
[08:31]
mhoran10 minutes later,
[mhoran@friction] ~% date; sleep 300; date
Thu Sep 10 11:20:10 EDT 2009
On my work laptop,
[mhoran@mhoran-thinkpad] ~% date; sleep 60; date
Thu Sep 10 11:29:22 EDT 2009
Thu Sep 10 11:30:22 EDT 2009
So, something is up.
[08:31]
***vtoms has quit IRC (Remote closed the connection) [08:34]
vtoms has joined #arpnetworks [08:39]
mhoran[mhoran@friction] ~% date; sleep 300; date
Thu Sep 10 11:20:10 EDT 2009
Thu Sep 10 11:41:10 EDT 2009
[08:48]
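
A small loop makes the drift easier to quantify than eyeballing pairs of dates; plain POSIX sh, and date +%s works on both the FreeBSD and Linux guests involved:

  #!/bin/sh
  # request 60s of sleep five times and report what we actually got
  for i in 1 2 3 4 5; do
      start=$(date +%s)
      sleep 60
      end=$(date +%s)
      echo "requested 60s, slept $((end - start))s"
  done
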
........ (idle for 38mn)
***greg_dolley has joined #arpnetworks [09:26]
greg_dolleyhello! [09:26]
***ballen has joined #arpnetworks [09:39]
ballen30 minutes of downtime eh? [09:39]
mhoranballen: You running FreeBSD? [09:42]
ballenya [09:42]
mhoranBeen trying to diagnose some issues with cron, which seem to trace back to sleep(), which may even be scheduler related.
Does date; sleep 60; date act as expected for you?
[09:42]
ballenrgr chking [09:43]
mhoranWhen I ran it, sleep 300 took 21 minutes. [09:43]
ballenup_the_irons: when you get in let me know, would like to discuss expectations of uptime, etc and so forth
mhoran: yea its all sorts of messed up
[ballen@arp ~]$ date; sleep 10; date
Thu Sep 10 12:46:29 EDT 2009
Thu Sep 10 12:47:09 EDT 2009
[09:44]
mhoran7.2? [09:47]
ballenya [09:47]
mhoranYeah. Something is definitely borked.
I noticed because my 5 minute Cacti cron has been complaining for months. :)
[09:47]
ballendoes 7.2 use the new scheduler [09:48]
mhoranI ran cron in debug mode and saw that it had a negative sec-to-wait. So then I tested sleep, which is exhibiting the same behavior.
Yes, it does.
Not totally sure if it's scheduler related, or something else.
But, something is definitely busted.
[09:48]
ballenyep [09:49]
mhoranHopefully up_the_irons can help us figure it out.
May need to mail the FreeBSD lists as well.
Probably after work. It's crazy today.
[09:49]
ballenyea
just woke up from working on thesis till 4am last night
hopefully no one at work misses me
whats sleep use to tell time?
[09:49]
mhorannanosleep() is the syscall. [09:52]
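
That is easy to confirm on FreeBSD with truss (strace is the Linux equivalent); a sketch:

  # trace the syscalls sleep(1) makes; truss prints to stderr
  truss sleep 2 2>&1 | grep nanosleep
  # if nanosleep returns long after the requested interval, the problem
  # sits below libc, in the kernel clock / hypervisor interaction
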
ballenhmm, yea really don't feel like figuring this one out at the moment. Let me know if you figure out anything. My gut feeling is it has to do with KVM/Qemu
likely how nanosleep is counting time
and how KVM is sharing cycles
brb need coffeee
[09:55]
.... (idle for 15mn)
***ballen is now known as ballen|away
heavysixer has quit IRC (Read error: 145 (Connection timed out))
[10:11]
heavysixer has joined #arpnetworks [10:21]
........ (idle for 37mn)
greg_dolley has quit IRC (Read error: 110 (Connection timed out)) [10:58]
ballen|away is now known as ballen
ballen has quit IRC (Remote closed the connection)
[11:03]
.............. (idle for 1h7mn)
up_the_ironsmhoran: here's what I have from a Linux VM:
garry@ice:~ $ date; sleep 1; date
Thu Sep 10 12:12:07 PDT 2009
Thu Sep 10 12:12:08 PDT 2009
garry@ice:~ $ date; sleep 20; date
Thu Sep 10 12:12:15 PDT 2009
Thu Sep 10 12:12:35 PDT 2009
the host box is the same
on FreeBSD it is jacked:
[arpnetworks@freebsd-ha ~]$ date; sleep 1; date
Thu Sep 10 12:15:37 PDT 2009
Thu Sep 10 12:15:40 PDT 2009
[12:14]
mhoranGood to know. Looks like all of us on FreeBSD are experiencing this.
Do you have something that's not 7.2 (the old scheduler)?
[12:15]
up_the_ironsmhoran: i believe I do, but it's stopped right now cuz i ran out of RAM (hence the maintenance tonight :)
w00t, OpenBSD still rocks it:
[12:16]
mhoranHeh. Okay. [12:16]
up_the_ironss3.lax:~> date; sleep 20; date
Thu Sep 10 12:01:14 PDT 2009
Thu Sep 10 12:01:34 PDT 2009
s3.lax:~> date; sleep 1; date
Thu Sep 10 12:01:39 PDT 2009
Thu Sep 10 12:01:40 PDT 2009
s3.lax:~> date; sleep 1; date
Thu Sep 10 12:01:41 PDT 2009
Thu Sep 10 12:01:42 PDT 2009
[12:16]
mhoranInteresting. [12:17]
up_the_ironsgiven OpenBSD is probably the least virtualized OS, and it is working, I'd have to point the finger at FreeBSD on this one, instead of KVM/QEMU. however, it probably has to do with the interaction of the two [12:18]
mhoranYeah. I did not have this problem with VMware. [12:19]
up_the_ironsthat OpenBSD VM is on the same host too
mhoran: you tried 7.2 w/ VMware?
[12:19]
mhoranOoh, this is 7.1.
I think I have a 7.2 running somewhere.
7.1/ESXi --
vps% date; sleep 20; date
Thu Sep 10 15:19:15 EDT 2009
Thu Sep 10 15:19:35 EDT 2009
vps% date; sleep 1; date
Thu Sep 10 15:19:38 EDT 2009
Thu Sep 10 15:19:39 EDT 2009
vps% date; sleep 1; date
Thu Sep 10 15:19:40 EDT 2009
Thu Sep 10 15:19:41 EDT 2009
So that's good.
[12:20]
up_the_ironsI'll play with it more tonight around the maintenance window; I'll have a lot of time to kill then [12:21]
mhoranYeah, 7.2/ESX is fine. Same as 7.1. [12:22]
up_the_ironsah ok [12:22]
......... (idle for 43mn)
***greg_dolley has joined #arpnetworks [13:05]
heavysixer has quit IRC () [13:11]
up_the_ironsgreg_dolley: welcome to IRC [13:18]
greg_dolleythx :-) [13:20]
cableheadgreg_dolley: haha, yo greg
this is andy from revver, not sure if you remember me
[13:28]
mhoranup_the_irons: Do you have a machine you can test this on? Apparently adding hint.apic.0.disabled="1" may fix this. [13:33]
up_the_ironsmhoran: machine = FreeBSD VM?
cablehead: he must be at lunch...
[13:34]
mhoranup_the_irons: Yes. [13:34]
up_the_ironsmhoran: sure, where should I put that? in sysctl.conf? [13:35]
mhoranOh, I left out -- adding ... to /boot/loader.conf [13:35]
up_the_ironsah ah [13:35]
cableheadup_the_irons: either that or rocking out to some thumping metal [13:35]
up_the_ironscablehead: true!
mhoran: does this look right:
[arpnetworks@freebsd-ha ~]$ cat /boot/loader.conf
hint.apic.0.disabled="1"
[arpnetworks@freebsd-ha ~]$
[13:35]
mhoranThat should be it. [13:37]
up_the_ironsok, rebooting... [13:37]
greg_dolleycablehead: hey! I remember you ;-) [13:45]
jeevman
i dont thin i'll ever get a freebsd vps
[13:50]
mhoranDon't say that! We'll get to the bottom of this ...
Aside from that, it works great!
[13:50]
mike-burnsYeah, no complaints from me. I don't do a lot of sleep-related work. [13:51]
jeevnoo
not cause of that
when i use bsd, i use it for serious thing.. i build things by hand, or ports
never packages.
[13:51]
up_the_ironsjeev: so what's the prob? ;) with ports, you can install everything from source, that's one cool thing about it [13:55]
jeevyea [13:56]
up_the_ironsmhoran: ok, so, had trouble with that line. it won't find the disk for some reason. I had to go into boot loader and do 'unset hint.apic.0.disabled' and then it booted fine [13:56]
jeevexcept, vps.. = slow ;) [13:56]
***vtoms has quit IRC ("Leaving.") [13:57]
mhoranInteresting. [13:57]
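
For completeness, another knob that was widely suggested at the time for FreeBSD guests with flaky timekeeping (not something tried in this log) is lowering the timer interrupt rate and checking which timecounter the kernel picked:

  # /boot/loader.conf -- drop HZ from the 1000 default; fewer timer
  # interrupts for the hypervisor to deliver punctually
  kern.hz="100"

  # at runtime: which clock source is in use, and what else is available
  sysctl kern.timecounter.hardware
  sysctl kern.timecounter.choice
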
up_the_ironsmhoran: you want to play with my test VM?
i could give you login and console
[13:58]
mhoranSure, probably have some time after work ...
jeev: Haven't found my VPS to be slow. At work, we run everything virtualized, and it's fine.
[13:58]
up_the_ironsok, just don't trash it too much, I use it for DRBD testing at the moment :) [13:59]
mhoranHeh, okay. No worries. [14:00]
mike-burnsjeev: My ARP Networks VPS is faster than my laptop much of the time. [14:03]
up_the_ironsfaster? w00t
up_the_irons pets his new Intel Xeon E5430
[14:04]
***visinin has quit IRC ("sleep") [14:17]
jeeveh
i mean like
how often can you build world.
i want a peer1 LA colo
for cheap.
[14:20]
up_the_ironsjeev: peer1 ain't cheap i hear [14:29]
......... (idle for 44mn)
jeevyea
there was someone on wht
who did colo for like 70 or somthing
i forgot what bandwidth
but he told me he's leaving it soon
[15:13]
***ballen has joined #arpnetworks [15:15]
ballenup_the_irons: you on [15:16]
up_the_ironsballen: yo [15:16]
ballen30 minutes of downtime seems a bit long no
?
[15:17]
up_the_ironsballen: it is, but what can I do; I have RAM and a CPU to install, and if I rush it, something could break, and then the downtime would be much greater
ballen: if it was just RAM, it'd be a lot quicker
[15:17]
ballenno way to migrate vm's? [15:18]
up_the_ironsballen: it would take longer than 30 minutes ;) a 'dd' from LVM to disk, then transfer to another server, and 'dd' from disk back to LVM <-- takes some time as well, and you're down the whole time [15:19]
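
The offline copy being described boils down to one pipeline (a sketch with hypothetical volume and host names; the guest must be stopped first, and the destination LV pre-created at the same size):

  # on the source host: stream the logical volume into the matching LV
  # on the destination over SSH; the VM is down for the whole transfer
  dd if=/dev/vg0/vm_disk bs=1M | ssh other-host 'dd of=/dev/vg0/vm_disk bs=1M'
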
ballensigh.... [15:19]
up_the_ironsballen: my ultimate goal is to have the VM disk images on a DRBD volume, if performance turns out to still be good
ballen: i'm currently testing this
[15:20]
Nat_UBSAN storage...that fix it all [15:20]
ballenso centrally store the images, and do diskless botting on the Qemu
booting even
[15:21]
up_the_ironsballen: and then it would be possible to "live" migrate and if everything goes right, there would be no downtime at all
Nat_UB: SAN storage is very expensive
[15:21]
Nat_UBTell me about it...doing that at two sites at $WORK [15:21]
ballendoesn't DRBL use NFS ? [15:21]
up_the_ironsballen: DRBD would be "central" store (more like, two boxes get paired), and it is trivial to boot off it. That's a solved problem, but there are performance issues to account for [15:23]
ballenyea I've used DRBL a year ago
to deploy a 80+ machine lab
[15:23]
up_the_ironsballen: DRBD is a distributed block device; not related to NFS [15:23]
ballenawwww
Diskless Remote Booting Linux
hah
[15:23]
up_the_ironsLOL [15:24]
ballenanywho [15:24]
up_the_ironsn/m
;)
ballen: trust me, I feel your pain, I have several important VMs of my own that are going down (arpnetworks.com site itself, pledgie.com, my shared hosting server)
[15:24]
ballenwhats the need for the new hardware, obviously other than increasing capacity. Couldn't just buy a new server? [15:25]
up_the_ironsballen: I'm going to try to be as quick as possible; and once I certify the DRBD setup I'm testing currently, I will let those who want to be on that go on it. [15:25]
Nat_UBballen: Giving him hell huh? [15:25]
ballena little
just 30 minutes downtime suuuucks
[15:26]
Nat_UB:) [15:26]
ballenbut understandable I suppose [15:26]
Nat_UBIn this case I'm still building....so downtime no concern for me hehehehe
I work in the 'NO DOWNTIME' field...so I've heard all the griping before....Irons, keep up the good work!
[15:26]
up_the_ironsballen: the "just" in "just buy a new server" is the hard part ;) I don't buy cheap boxes, I have to shell out about $6K, and that just isn't gonna happen given I can double the cores on the current box *and* double the RAM
on the current box
[15:27]
ballenup_the_irons: does DRBD do synchronous writes?
up_the_irons: well you should have thought of that ahead of time :-p
[15:27]
up_the_ironsballen: "DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network based" [15:28]
ballenyes yes [15:28]
up_the_ironsballen: dude, to be honest, I was like this: [15:28]
ballenwhen you write to one side of mirror
does it wait till the other side to finish
[15:28]
up_the_irons"I don't know how well my new VPS offering will sell, so let's not buy both CPUs and 32 GB of RAM off the bat, start with 1 CPU and 16 GB of RAM and upgrade later" <-- now that bites me in the ass :) [15:29]
ballenwondering if that is the source of performance issues [15:29]
up_the_ironsballen: OK, gotcha. that part of it is configurable
ballen: I configure it to wait for the write on the other end; it has to be very consistent
[15:29]
ballenup_the_irons: yea I figured that was your train of thought. Just giving you a hard time
yea
async without conflict resolution is a piss poor idea
[15:30]
up_the_ironsballen: my thinking is, i'd rather sacrifice performance than have a catastrophic failure [15:30]
ballenalthought
although*
if you think of it in a master -> slave configuration
where the slave will never write
[15:31]
up_the_ironsballen: where it won't matter much is in reads, cuz reads will come off the local disk; which is kinda an advantage over an external SAN setup [15:31]
ballenwhats wrong with doing writes asynchronously [15:31]
up_the_ironsballen: well, master box crashes while writing to local disk, yet that block isn't replicated on the slave? I think that would be a problem [15:33]
ballenhmm
I guess its a matter of not allowing it to get too far out of sync
and allowing a certain amount of time for lost data
whatever one would be comfortable with
[15:33]
up_the_ironsballen: yeah, I think the #1 goal with DRBD is for the data to never go out of sync; but it gives you some knobs to play with [15:35]
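
In DRBD 8.x terms those knobs are the replication protocols: A (fully asynchronous), B (ack once the peer has the write in memory), C (ack only after the peer's disk write). A sketch of the wait-for-the-peer configuration described above, with hypothetical hosts, disks, and addresses:

  # /etc/drbd.conf (abridged)
  resource r0 {
      protocol C;            # a write completes only once both nodes have it
      on host-a {
          device    /dev/drbd0;
          disk      /dev/vg0/vm_disk;
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on host-b {
          device    /dev/drbd0;
          disk      /dev/vg0/vm_disk;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }
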
ballenso how much performance loss are you seeing using it [15:35]
up_the_ironsi'm not to the hard benchmark part yet; more like "play with the machine; does it feel slow?"
so far, i can't really tell the different
[15:36]
ballenah [15:36]
up_the_irons*difference [15:36]
ballencool, what kind of network do you have between the machines [15:36]
Nat_UBHe's got 10gig... :) [15:37]
up_the_ironsballen: 1G [15:37]
ballenjust one link per box?
ah
dedicated to task?
[15:37]
up_the_ironsballen: right now i actually have two boxes physically linked together, no switch in between [15:38]
ballenah [15:38]
up_the_ironsballen: if I got more NICs, I could bond them, and I hear the network speed would be faster than disk write speed and then performance issues are moot; but I want to *see* this in action before I certify it
Nat_UB: i wish i had 10G :)
the intel ones are like $2K a pop
[15:39]
ballenyea good plan. There may be some overhead in bonding. Does DRBD run over TCP/IP [15:40]
up_the_ironsballen: yes, it does run over TCP/IP [15:40]
Nat_UBHaven't tried 10g but done some bonded stuff [15:40]
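
The bonding idea, sketched for Linux with round-robin mode (balance-rr is the mode that can push a single stream past one NIC's line rate, at the cost of possible packet reordering); interface names and addresses are hypothetical:

  # load the bonding driver in round-robin mode with link monitoring
  modprobe bonding mode=balance-rr miimon=100
  ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
  # enslave the two NICs dedicated to the replication link
  ifenslave bond0 eth1 eth2
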
up_the_ironsAoE (ATA over Ethernet) is another alternative that runs on layer 2, but it is pretty much feature-less and does not afford much protection against someone accidentally writing to the volume from two boxes at the same time (which will instantly corrupt it)
I use AoE for backup images only
but that said, AoE is pretty cool in its simplicity
[15:41]
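
For comparison, the entire AoE "feature set" fits in a few commands (vblade on the target, aoetools on the initiator; shelf/slot numbers and paths are hypothetical), which is both the simplicity and the lack of guard rails just described:

  # target: export a block device as AoE shelf 0, slot 1 on eth0
  vblade 0 1 eth0 /dev/vg0/backup

  # initiator: load the driver and discover targets; the device shows up
  # as /dev/etherd/e0.1 -- nothing stops a second host from attaching and
  # writing too, which is the instant-corruption risk mentioned above
  modprobe aoe
  aoe-discover
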
ballenyea AoE is pretty neat [15:42]
up_the_ironsi got this from a 'dd' test on my FreeBSD DRBD testing VM:
1048576000 bytes transferred in 76.882014 secs (13638769 bytes/sec)
so like 13 MBps
not all that great
but those are real writes, not cached
[15:43]
ballenyea, whats that same benchmark on a typical FreeBSD VM [15:44]
up_the_ironslet me see
this is what i'm running:
dd if=/dev/zero of=delete-me bs=1M count=2000
you want to make sure 'count' is about double your RAM, so caching goes away
now, the performance of 'dd' will give some raw numbers that may or may not correlate with how the VM actually performs during normal use; that would depend on a lot of other factors. Even if dd has lower speeds on DRBD, the trade-off for uptime and ease of VM migration may well be worth it
[15:45]
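
Sanity-checking the figure above: 1048576000 bytes over 76.88 seconds is indeed about 13.6 MB/s. On the Linux side, GNU dd can also be told to flush before reporting, so the rate includes the final sync (conv=fsync is GNU-only; the stock FreeBSD dd lacks it):

  # verify the reported rate
  echo '1048576000 / 76.882014' | bc -l    # ~13638769 bytes/sec, ~13 MB/s

  # Linux/GNU dd: fsync the output file before printing the transfer rate
  dd if=/dev/zero of=delete-me bs=1M count=2000 conv=fsync
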
ballentrue
I'll be on later, and will be around during the maintenance window. Let me know if you make any breakthroughs with DRBD.
[15:51]
***ballen has quit IRC ("Bye!") [15:54]
up_the_ironsballen: will do!
Nat_UB: thanks for the support up there, BTW ;)
[15:54]
Nat_UBSure thing! U'r the man! [15:55]
up_the_irons:) [15:55]
....... (idle for 30mn)
***greg_dolley has quit IRC () [16:25]
heavysixer has joined #arpnetworks [16:32]
................... (idle for 1h33mn)
vtoms has joined #arpnetworks [18:05]
...... (idle for 27mn)
heavysixer has quit IRC ()
vtoms has quit IRC ("Leaving.")
[18:32]
...... (idle for 29mn)
Nat_UB has quit IRC (bartol.freenode.net irc.freenode.net)
Nat_UB has joined #arpnetworks
[19:05]
heavysixer has joined #arpnetworks [19:14]
....... (idle for 34mn)
up_the_ironswe need more IRC'ers
up_the_irons goes back to editing build logs
[19:48]
...... (idle for 25mn)
***heavysixer has quit IRC (Read error: 104 (Connection reset by peer)) [20:13]
.... (idle for 17mn)
jeevbuild bots
;)
[20:30]
up_the_ironsno, real people :) [20:32]
***timburke has quit IRC (Remote closed the connection)
timburke has joined #arpnetworks
[20:39]
obsidiethi think i might have a recruit for you [20:56]
.... (idle for 19mn)
up_the_ironsup_the_irons does happy clappy hands
obsidieth: nice :)
[21:15]
obsidieth: someone you know? or found on a message board?
btw, guys, let me know if there are other boards out there besides WHT that have an advertising section; I post weekly on WHT and recently found webhostingboard.net. If there are others, I'd like to know :)
cd $data-center
[21:30]
jeevi will advertise you for $1000/month on my google PR1 [21:33]
up_the_ironsjeev: LOL [21:33]
jeev;)
palm pre is so lame
[21:33]
***ballen has joined #arpnetworks [21:45]
ballenping [21:45]
obsidiethyeah someone i know
people on efnet arent used to servers that actually stay up:p
[21:47]
ballenhuh [21:48]
......... (idle for 41mn)
up_the_ironsefnet, heh
back in the day
[22:29]
jeevefnet sucks [22:29]
up_the_ironslol, DRBD is now following you on Twitter!
"DRBD is now following you on Twitter!"
that is
[22:29]
ballenheh
looking at espresso machines, and grinders
is dropping a grand on an espresso machine + grinder a good thing?
[22:30]
up_the_ironswow
i'd rather buy a good 48 port GB switch for that
and for a grand, even that is hard to find
[22:30]
ballenheh
have a good gig switch already
don't use it
[22:31]
up_the_ironsup_the_irons motions ballen to hand it over [22:31]
ballenI keep my computer equipment at home very light
to keep power down
[22:32]
up_the_ironsyeah, i don't have much at home
i keep it all at the data center :)
[22:32]
ballenits just a netgear [22:32]
jeevshit i just bought a dell poweredge
48 port gig and i haven't even sent it to the datacenter
[22:32]
up_the_ironsthere's some new cage here where the fool has like 30 power circuits in 200 sq. ft.
i was like "wwhhhhhhhhaaaaa?"
[22:32]
jeevthat must be me [22:33]
up_the_ironsLOL [22:33]
ballenany idea what amperage? [22:33]
jeevi was a hazard at uscolo
they would detour people around my stuff during tours
[22:33]
up_the_ironsi hate dell management interface on their f*cking switches, but they are priced well [22:33]
jeevyea i aint gonna use the management stuff
just cli
[22:33]
ballenhttp://www.netgear.com/Products/Switches/FullyManaged10_100_1000Switches/GSM7212.aspx [22:33]
jeevit's some ieee standard [22:33]
ballenmy home switch heh [22:33]
up_the_ironsballen: looks like 20 amps each; they must be in a redundant A/B pair, cuz I can't imagine they actually gave him all that power [22:34]
ballendamn [22:34]
up_the_ironsballen: i've heard some good things about the GSM
ironically
brb
[22:34]
ballenk
yea its a solid switch, but I really haven't had to do much with it
Grinder: http://www.visionsespresso.com/node/73
Espresso Machine: http://www.wholelattelove.com/Rancilio/ra_silvia_2009.cfm
[22:34]
jeevwhy do you need that [22:38]
ballenits more of a question of why do I not need that
as well as a small coffee addiction
[22:38]
jeevlol [22:39]
ballenI really enjoy espresso drinks, and it would save me money in the long run if I don't goto any cafe
$3 latte once a day
[22:40]
jeevthat's the point of coffee or drinks [22:41]
up_the_ironsLOL [22:41]
jeevto go into the place [22:41]
ballenis 1095 bucks [22:41]
jeevand see how bitches
hot
[22:41]
up_the_ironshahaha [22:41]
jeevup_the_irons wishes he could get the girls from glendale! [22:41]
ballenlmao [22:41]
jeevwell some are fugly
but some are hot
[22:41]
up_the_ironsthe chicks in glendale, yeah some are pretty hot [22:42]
jeevsexUAL [22:42]
ballendef some hot females at some various cafes I've been into
also cafe is like 15 minutes away
and in the morning, F that!
[22:42]
jeevshit
sounds like you're from Charlevoix, MI
when you say it's 15 min away
[22:44]
ballenya [22:44]
jeevwow, what a guess! [22:45]
ballenhah [22:45]
jeevballen, gto a linux or bsd router on your cable ? [22:45]
ballenso this is a fun thing
actually I'm behind a WISP
who uses AT&T, and Charter
[22:45]
jeevahh
i was gonna say
sniff me some mac addresses
i steal cable internet sometimes
[22:46]
ballenhah [22:46]
jeevalthough i have a primary isp
i prefer stealing charter, 20 megs sometimes
their network sucks
[22:46]
up_the_ironsT - 15 minutes [22:46]
ironically, the time right before a maintenance window is a time where I actually wait and do nothing (I've already prepared), so now I'm just waiting for the clock to strike the right time
weird
[22:51]
jeevwhat are you gonna do
i didn't read it
[22:51]
ballenyep, always annoying period of time [22:51]
up_the_ironscuz I could start now, but I told everyone 11, so I must wait
jeev: RAM + CPU upgrade
[22:51]
jeevon everything ? [22:52]
up_the_ironsjeev: just one box, but it holds the majority of the guys in here [22:52]
jeevis arpnetworks just one box >? :)
i dont mind really
so far, stable as fark
[22:52]
up_the_ironsjeev: no, i have several, but the one in question is my newest [22:52]
jeevcool
so how is CPU split
is it burst for everyone ?
[22:52]
up_the_ironsno burst
if you ordered 1 cpu, you get 1 cpu
[22:53]
jeevthen what
ahh
how many cpu's in the box i'm on
[22:53]
up_the_ironssome guys are running SMP, but not many
jeev: 1 CPU, quad core
[22:53]
jeevso i'm considered a one cpu user
since cpuinfo shows me a single cpu
[22:54]
ballenso I actually have a core all my to myself? [22:54]
jeevso the server i'm on has 4 cores right now
so 4 users? heh
[22:55]
up_the_ironsok guys, T minus 1 minute
i'll get disconnected
[23:00]
ballenk have fun, don't break anything ;-) [23:00]
***mike-burns has quit IRC ("WeeChat 0.2.6.3") [23:01]
......... (idle for 40mn)
[FBI] starts logging #arpnetworks at Thu Sep 10 23:41:06 2009
[FBI] has joined #arpnetworks
[23:41]
ballensee I just had to swear [23:41]
jeevheh
is that personal stop logging ?
[23:42]
ballenhmm
I'd assume so
[FBI]: off
[23:43]
***[FBI] has left
[FBI] starts logging #arpnetworks at Thu Sep 10 23:45:00 2009
[FBI] has joined #arpnetworks
[23:43]
ballenand hes back [23:45]
up_the_ironsw00000000000000000000000000000000t
what did I miss?
I'll show you what you guys missed:
             total       used       free     shared    buffers     cached
Mem:         31856      15133      16723          0       2905         64
look at "free" :)
[23:45]
ballennice [23:45]
jeevup_the_irons, i cancelled while you were gone.
lol just kidding
[23:45]
ballenlmao
ahahahah
[23:45]
jeevup_the_irons, so the server im' on has only 4 cores? [23:45]
up_the_ironsand now this:
garry@kvr02:~$ cat /proc/cpuinfo | grep 'model name'
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
up_the_irons slaps jeev with a trout
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
jeev: now it has 8 :)
[FBI]: welcome back
[23:45]
jeevbut...
it had 4, so 4 customers?
jeev pokes up_the_irons
[23:46]
up_the_ironsjeev: no no, customers can share cores [23:47]
jeevoh
so how many cores do i have
just one, is it dedicated
[23:47]
up_the_ironsjeev: there is no setting that says "this VM gets this core", although I *can* do that, just haven't. I let the Linux KVM/QEMU scheduler put the VMs on the least loaded core in real time [23:47]
jeevok [23:48]
up_the_ironsjeev: nobody has a dedicated core; i don't think my business model could support that. cores are the least numerous resource
RAM is easier
disk is easiest
[23:48]
jeevyea
i dont care realy
[23:49]
ballen41 minutes cutting it a bit close weren't we :-p [23:51]
up_the_ironsheh, i just took a video of all the blinking lights on this box, w/ my iphone
ballen: oh yeah man, I failed that one hard
ballen: the RAM did not take at first, still registered 16GB, not 32
[23:51]
ballenah, tough day [23:51]
up_the_ironsballen: I had to unrack box and put them in different channels [23:51]
cutsman:( [23:52]
up_the_ironsballen: i'm just happy everything went OK; ya never know when opening boxes and tinkering around
cutsman: whoa, who are you? :)
my Nagios is all green!
[23:53]
ballenyea, I tend to avoid doing such things to production boxes [23:53]
up_the_ironsballen: I really try not to also; but sometimes it's unavoidable. This will be the last time major maintenance is done on a box before I get DRBD live migrations working; then it will be a moot point [23:54]
ballensounds good [23:55]
***obsidieth has joined #arpnetworks
cutsman has quit IRC ("leaving")
[23:55]
obsidiethit is i [23:56]
ballendon don don [23:56]
up_the_ironswonder who cutsman was
obsidieth: yo
[23:56]
