visinin: haha, thanks
i tried to get fre.sh while i was at it, but no luck obsidieth: heh it looks like a .sh is like 100 usd ***: toddf_ has joined #arpnetworks
toddf has quit IRC (Read error: 110 (Connection timed out)) up_the_irons: forcefollow: i'm here Thorgrim1: Yawn mhoran: up_the_irons: My VM clock is all over the place and ntpd seems not to be doing anything. Ideas? up_the_irons: mmm... strange
what ntpd is it exactly? openntpd?
i know openntpd will only slowly update a bad clock (so it doesn't jump) mhoran: Ah, perhaps I'm an idiot. I would have expected an error message if the config file did not exist, but instead it was running and doing nothing!
So I changed that. Let's see if my clock syncs up now. :) up_the_irons: haha
:)
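(For reference: if it is OpenNTPD, a minimal /etc/ntpd.conf is just a servers line, and starting it with -s steps a badly wrong clock once at boot instead of only slewing it slowly. The server name below is an arbitrary choice, not something from this conversation.)
# /etc/ntpd.conf -- minimal OpenNTPD configuration (sketch)
servers pool.ntp.org
# start as: ntpd -s    (set the clock immediately at startup, then keep it adjusted)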
mhoran: tried sending you a maintenance advisory (actually, to everyone):
A8566E8C59 1542 Thu Sep 10 05:24:31 garry@rails1.arpnetworks.com
(host vroom.ilikemydata.com[71.174.73.69] said: 450 4.7.1 <matt@matthoran.com>: Recipient address rejected: Greylisted for 5 minutes (in reply to RCPT TO command))
mhoran: so I hope you still get it Thorgrim1: Should go through when the mailserver retries to send it
We do the same thing ***: Thorgrim1 is now known as Thorgrimr up_the_irons: Thorgrimr: ah, gotcha, cool Thorgrimr: The idea being that spammers won't waste the time to come back and try again, but any decent MTA will mhoran: Dovecot killed itself when I synced my clock and Postfix got confused. Secondary MX, which does greylisting, answered because primary was down. Fun! -: mike-burns tries to convert maintenance window time into UTC then EDT, has to break out a calculator up_the_irons: mhoran: at least your secondary picked it up
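(A sketch of what that greylisting setup typically looks like on Postfix, assuming the postgrey policy daemon; the log doesn't say what vroom.ilikemydata.com actually runs, and the socket path varies by OS. Postgrey's default delay is 300 seconds, which lines up with the "Greylisted for 5 minutes" in the bounce above.)
# /etc/postfix/main.cf
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:postgrey/socket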
mike-burns: should be something like 03:00 EDT
Thorgrimr: gotcha, interesting obsidieth: whats the syntax to make unreal ircd bind to a range of ports. mike-burns: I found a Yahoo Answers thread that converted 11:00 PST to EDT, amusingly. up_the_irons: mike-burns: was it correct?
obsidieth: not sure... mike-burns: Yup! up_the_irons: nice obsidieth: doh.
that was easy Thorgrimr: No email for me :( up_the_irons: Thorgrimr: it's coming real soon (about 10 minutes) obsidieth: for the record, i could not be more pleased with how this is working so far up_the_irons up_the_irons: RAD
obsidieth: glad you like it :) Thorgrimr: up_the_irons: No worries, I'm at work now anyway, and teaching this afternoon, so no play for me up_the_irons: ah shucks
;) ***: vtoms has quit IRC ("Leaving.") up_the_irons: Thorgrimr: ...and it's off! Thorgrimr: Alrighty then :) mhoran: up_the_irons: So this ntpd issue is related to a Cacti issue I'm trying to track down.
I thought ntpd would keep it in sync, and now that it's set up correctly, it seems to be. up_the_irons: mhoran: your cacti or my cacti? :) mhoran: However, my cron tasks aren't running on time at all.
My cacti. up_the_irons: ah mhoran: 5 minute tasks are running, sometimes, over a minute late. up_the_irons: sounds like a cron issue mhoran: So my graphs are basically useless.
I figured it was because my clock wasn't synced and it was getting all confused, but it seems something else may be up.
Wondering if you've seen anything similar. up_the_irons: if the time is right, but cron doesn't execute on time, suspect cron. perhaps it needs to be restarted
i've seen time drift on VMs (pretty much across the board: Xen, KVM, VMware, etc...)
but ntpd pretty much keeps it in line
i haven't had any issues with cron though, as long as time is sync'd mhoran: Yeah. I've never seen this cron issue before. Didn't have it when running VMware, and my Xen boxes at work seem to be doing just fine.
Huh. up_the_irons: mhoran: what time does your VM show? mhoran: Thu Sep 10 09:18:22 EDT 2009 up_the_irons: as of now, the host says:
Thu Sep 10 06:19:20 PDT 2009
i'm not sure how cron gets its time, from hardware clock, OS, or what.. probably through whatever stdlib C call provides that ***: heavysixer has joined #arpnetworks mhoran: Okay. I restarted cron, let's see what that does. up_the_irons: roger
heavysixer: how's it hangin heavysixer: up_the_irons: yo
just getting ready to start working on digisynd's site again.
you? up_the_irons: heavysixer: provisioning VMs heavysixer: gotcha
you are getting quite a few clients now huh? mhoran: Nope, still screwed up. Huh. up_the_irons: heavysixer: it's picking up mhoran: up_the_irons: That's quite the upgrade the server is getting! up_the_irons: mhoran: logs don't show anything?
mhoran: yeah, 16 GB of RAM is going in, and another quad-core Xeon @ 2.66GHz bad boy
heavysixer: haven't done Amber's VPS yet, had two orders in front of her; tell her not to hate me ;) heavysixer: up_the_irons: no worries we are not at the point where we are ready to deploy. up_the_irons: heavysixer: cool heavysixer: up_the_irons: soon though ;-) mhoran: up_the_irons: Will this bring it to dual quad core or quad quad core? heavysixer: so no slacking up_the_irons: heavysixer: oh it's gonna be up today, for sure. :) heavysixer: up_the_irons: cool up_the_irons: mhoran: dual quad mhoran: That's exciting. up_the_irons: mhoran: really interested to see how load avg goes down with the addition of more cores to distribute the load mhoran: My static Web site will certainly benefit from the power! up_the_irons: LOL mhoran: Yeah. up_the_irons: if the load avg drops in half, that'd be awesome utilization of the cores mhoran: Oh totally. up_the_irons: omg, now I know how to say "Sent from my iPhone" in Japanese
iPhoneから送信 mhoran: Hahaha. up_the_irons: saw that on the bottom of a new customer's email
(who is in Japan) mhoran: Huh. So my clock seems to be fine, but these tasks are not running 5 minutes apart. They're all over the place.
It's almost like the scheduler is not synced with the clock or something. up_the_irons: mhoran: you should do like:
*/1 * * * * root uptime
erm
*/2
or w/e it is mhoran: Yeah. up_the_irons: so it emails you something simple
see if you get the emails on-time
and at regular intervals mhoran: Got it in there now. We'll see. :) up_the_irons: cool mhoran: That's a good way to make me feel loved!
Send myself e-mail. up_the_irons: LOL mhoran: So I have this set to run every minute. It ran at 9:46, then 9:49. Skipped everything in between. Interesting.
Sep 10 09:46:20 friction /usr/sbin/cron[25023]: (mhoran) CMD (/bin/date)
Sep 10 09:49:05 friction /usr/sbin/cron[25051]: (mhoran) CMD (/bin/date)
Didn't even try to run it in between. up_the_irons: mhoran: did you use "*/2", i think that's every other minute mhoran: * * * * * up_the_irons: heh
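(For reference, the crontab forms being tossed around, as a sketch: in a user crontab (crontab -e) there is no user field; the "root" field in up_the_irons' earlier example belongs only in the system crontab, /etc/crontab.)
* * * * * /bin/date        # every minute (user crontab; "*/1" means the same thing)
*/2 * * * * /bin/date      # every second minute
# /etc/crontab only:  */5 * * * * root uptime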
let's see, on one of my Linux VMs, I have:
Sep 9 06:20:01 ice /USR/SBIN/CRON[12324]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
Sep 9 06:25:02 ice /USR/SBIN/CRON[12400]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Sep 9 06:30:01 ice /USR/SBIN/CRON[12430]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
so that's pretty much right on every 5 minutes
let me find a FreeBSD one... mhoran: Yeah. I'm drifting way past the minute. Interesting. up_the_irons: Sep 10 06:35:36 freebsd-ha /usr/sbin/cron[19804]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:41:08 freebsd-ha /usr/sbin/cron[19807]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:46:25 freebsd-ha /usr/sbin/cron[19823]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:50:36 freebsd-ha /usr/sbin/cron[19826]: (root) CMD (/usr/libexec/atrun)
Sep 10 06:56:08 freebsd-ha /usr/sbin/cron[19833]: (root) CMD (/usr/libexec/atrun)
atrun is supposed to run every 5 minutes
and look at that, cron is like being lazy about it
it's "about" every 5 minutes
the time delta is pretty close to 5 minutes, but it's not executing "on the dot" mhoran: Yeah. That's what's upsetting cacti.
Hrm.
Thu Sep 10 09:56:52 EDT 2009
That one was almost a minute late! up_the_irons: i wonder if cron is seeing a different time mhoran: [25430] TargetTime=1252591500, sec-to-wait=36
[25430] sleeping for 36 seconds
[25430] TargetTime=1252591500, sec-to-wait=-115
Interesting. up_the_irons: whoa, weird toddf_: I received two maintenance notices, identical, except for 'Message-Id:...@rails1.arpnetworks.com>' vs 'Message-ID: ..@garry-thinkpad.arpnetworks.com>' followed by a 'User-Agent: Mutt/1.5.16..' header
just fwiw ;-) ***: toddf_ is now known as toddf
vtoms has joined #arpnetworks up_the_irons: toddf: There was an error when sending the first one, so to be safe I sent it again from just my regular client (mutt)
toddf: looks like you got them both anyway :)
absolutely time for me to expire, thankfully i slept a little already
cd $bed toddf: ;-) mike-burns: [mike@jack] ~% date; sleep 1; date
Thu Sep 10 11:30:44 EDT 2009
Thu Sep 10 11:30:47 EDT 2009 mhoran: 10 minutes later,
[mhoran@friction] ~% date; sleep 300; date
Thu Sep 10 11:20:10 EDT 2009
On my work laptop,
[mhoran@mhoran-thinkpad] ~% date; sleep 60; date
Thu Sep 10 11:29:22 EDT 2009
Thu Sep 10 11:30:22 EDT 2009
So, something is up. ***: vtoms has quit IRC (Remote closed the connection)
vtoms has joined #arpnetworks mhoran: [mhoran@friction] ~% date; sleep 300; date
Thu Sep 10 11:20:10 EDT 2009
Thu Sep 10 11:41:10 EDT 2009 ***: greg_dolley has joined #arpnetworks greg_dolley: hello! ***: ballen has joined #arpnetworks ballen: 30 minutes of downtime eh? mhoran: ballen: You running FreeBSD? ballen: ya mhoran: Been trying to diagnose some issues with cron, which seem to trace back to sleep(), which may even be scheduler related.
Does date; sleep 60; date act as expected for you? ballen: rgr chking mhoran: When I ran it, sleep 300 took 21 minutes. ballen: up_the_irons: when you get in let me know, would like to discuss expectations of uptime, etc and so forth
mhoran: yea its all sorts of messed up
[ballen@arp ~]$ date; sleep 10; date
Thu Sep 10 12:46:29 EDT 2009
Thu Sep 10 12:47:09 EDT 2009 mhoran: 7.2? ballen: ya mhoran: Yeah. Something is definitely borked.
I noticed because my 5 minute Cacti cron has been complaining for months. :) ballen: does 7.2 use the new scheduler mhoran: I ran cron in debug mode and saw that it had a negative sec-to-wait. So then I tested sleep, which is exhibiting the same behavior.
Yes, it does.
Not totally sure if it's scheduler related, or something else.
But, something is definitely busted. ballen: yep mhoran: Hopefully up_the_irons can help us figure it out.
May need to mail the FreeBSD lists as well.
Probably after work. It's crazy today. ballen: yea
just woke up from working on thesis till 4am last night
hopefully no one at work misses me
whats sleep use to tell time? mhoran: nanosleep() is the syscall. ballen: hmm, yea really don't feel like figuring this one out at the moment. Let me know if you figure out anything. My gut feeling is it has to do with KVM/Qemu
likely how nanosleep is counting time
and how KVM is sharing cycles
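(A quick way to quantify the drift from a shell, rather than eyeballing individual date; sleep; date runs; this loop is a sketch and isn't from the log.)
for i in 1 2 3 4 5; do
  s=$(date +%s); sleep 60; e=$(date +%s)
  echo "requested 60s, slept $((e - s))s"
done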
brb need coffeee ***: ballen is now known as ballen|away
heavysixer has quit IRC (Read error: 145 (Connection timed out))
heavysixer has joined #arpnetworks
greg_dolley has quit IRC (Read error: 110 (Connection timed out))
ballen|away is now known as ballen
ballen has quit IRC (Remote closed the connection) up_the_irons: mhoran: here's what I have from a Linux VM:
garry@ice:~ $ date; sleep 1; date
Thu Sep 10 12:12:07 PDT 2009
Thu Sep 10 12:12:08 PDT 2009
garry@ice:~ $ date; sleep 20; date
Thu Sep 10 12:12:15 PDT 2009
Thu Sep 10 12:12:35 PDT 2009
the host box is the same
on FreeBSD it is jacked:
[arpnetworks@freebsd-ha ~]$ date; sleep 1; date
Thu Sep 10 12:15:37 PDT 2009
Thu Sep 10 12:15:40 PDT 2009 mhoran: Good to know. Looks like all of us on FreeBSD are experiencing this.
Do you have something that's not 7.2 (the old scheduler)? up_the_irons: mhoran: i believe I do, but it's stopped right now cuz i ran out of RAM (hence the maintenance tonight :)
w00t, OpenBSD still rocks it: mhoran: Heh. Okay. up_the_irons: s3.lax:~> date; sleep 20; date
Thu Sep 10 12:01:14 PDT 2009
Thu Sep 10 12:01:34 PDT 2009
s3.lax:~> date; sleep 1; date
Thu Sep 10 12:01:39 PDT 2009
Thu Sep 10 12:01:40 PDT 2009
s3.lax:~> date; sleep 1; date
Thu Sep 10 12:01:41 PDT 2009
Thu Sep 10 12:01:42 PDT 2009 mhoran: Interesting. up_the_irons: given OpenBSD is probably the least virtualized OS, and it is working, I'd have to point the finger at FreeBSD on this one, instead of KVM/QEMU. however, it probably has to do with the interaction of the two mhoran: Yeah. I did not have this problem with VMware. up_the_irons: that OpenBSD VM is on the same host too
mhoran: you tried 7.2 w/ VMware? mhoran: Ooh, this is 7.1.
I think I have a 7.2 running somewhere.
7.1/ESXi --
vps% date; sleep 20; date
Thu Sep 10 15:19:15 EDT 2009
Thu Sep 10 15:19:35 EDT 2009
vps% date; sleep 1; date
Thu Sep 10 15:19:38 EDT 2009
Thu Sep 10 15:19:39 EDT 2009
vps% date; sleep 1; date
Thu Sep 10 15:19:40 EDT 2009
Thu Sep 10 15:19:41 EDT 2009
So that's good. up_the_irons: I'll play with it more tonight around the maintenance window; I'll have a lot of time to kill then mhoran: Yeah, 7.2/ESX is fine. Same as 7.1. up_the_irons: ah ok ***: greg_dolley has joined #arpnetworks
heavysixer has quit IRC () up_the_irons: greg_dolley: welcome to IRC greg_dolley: thx :-) cablehead: greg_dolley: haha, yo greg
this is andy from revver, not sure if you remember me mhoran: up_the_irons: Do you have a machine you can test this on? Apparently adding hint.apic.0.disabled="1" may fix this. up_the_irons: mhoran: machine = FreeBSD VM?
cablehead: he must be at lunch... mhoran: up_the_irons: Yes. up_the_irons: mhoran: sure, where should I put that? in sysctl.conf? mhoran: Oh, I left out -- adding ... to /boot/loader.conf up_the_irons: ah ah cablehead: up_the_irons: either that or rocking out to some thumping metal up_the_irons: cablehead: true!
mhoran: does this look right:
[arpnetworks@freebsd-ha ~]$ cat /boot/loader.conf
hint.apic.0.disabled="1"
[arpnetworks@freebsd-ha ~]$ mhoran: That should be it. up_the_irons: ok, rebooting... greg_dolley: cablehead: hey! I remember you ;-) jeev: man
i dont think i'll ever get a freebsd vps mhoran: Don't say that! We'll get to the bottom of this ...
Aside from that, it works great! mike-burns: Yeah, no complaints from me. I don't do a lot of sleep-related work. jeev: noo
not cause of that
when i use bsd, i use it for serious thing.. i build things by hand, or ports
never packages. up_the_irons: jeev: so what's the prob? ;) with ports, you can install everything from source, that's one cool thing about it jeev: yea up_the_irons: mhoran: ok, so, had trouble with that line. it won't find the disk for some reason. I had to go into boot loader and do 'unset hint.apic.0.disabled' and then it booted fine jeev: except, vps.. = slow ;) ***: vtoms has quit IRC ("Leaving.") mhoran: Interesting. up_the_irons: mhoran: you want to play with my test VM?
i could give you login and console mhoran: Sure, probably have some time after work ...
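(Since the hint.apic.0.disabled trick kept that VM from finding its disk, other knobs commonly suggested for FreeBSD timekeeping under KVM, untested in this log, are the tick rate and the timecounter source.)
# /boot/loader.conf
kern.hz="100"                                # lower the timer interrupt rate in a VM guest
# at runtime:
sysctl kern.timecounter.choice               # list the available timecounter sources
sysctl kern.timecounter.hardware=ACPI-fast   # pick one explicitly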
jeev: Haven't found my VPS to be slow. At work, we run everything virtualized, and it's fine. up_the_irons: ok, just don't trash it too much, I use it for DRBD testing at the moment :) mhoran: Heh, okay. No worries. mike-burns: jeev: My ARP Networks VPS is faster than my laptop much of the time. up_the_irons: faster? w00t -: up_the_irons pets his new Intel Xeon E5430 ***: visinin has quit IRC ("sleep") jeev: eh
i mean like
how often can you build world.
i want a peer1 LA colo
for cheap. up_the_irons: jeev: peer1 ain't cheap i hear jeev: yea
there was someone on wht
who did colo for like 70 or something
i forgot what bandwidth
but he told me he's leaving it soon ***: ballen has joined #arpnetworks ballen: up_the_irons: you on up_the_irons: ballen: yo ballen: 30 minutes of downtime seems a bit long no
? up_the_irons: ballen: it is, but what can I do; I have RAM and a CPU to install, and if I rush it, something could break, and then the downtime would be much greater
ballen: if it was just RAM, it'd be a lot quicker ballen: no way to migrate vm's? up_the_irons: ballen: it would take longer than 30 minutes ;) a 'dd' from LVM to disk, then transfer to another server, and 'dd' from disk back to LVM <-- takes some time as well, and you're down the whole time ballen: sigh.... up_the_irons: ballen: my ultimate goal is to have the VM disk images on a DRBD volume, if performance turns out to be still good
ballen: i'm currently testing this Nat_UB: SAN storage...that fix it all ballen: so centrally store the images, and do diskless botting on the Qemu
booting even up_the_irons: ballen: and then it would be possible to "live" migrate and if everything goes right, there would be no downtime at all
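(The offline copy up_the_irons describes above, dd the LV out, move it, dd it back, collapses to a single pipe when the two hosts can reach each other; the LV path and target host here are hypothetical, and the guest stays down for the whole transfer.)
dd if=/dev/vg0/vm_guest bs=1M | ssh otherbox 'dd of=/dev/vg0/vm_guest bs=1M'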
Nat_UB: SAN storage is very expensive Nat_UB: Tell me about it...doing that at two sites at $WORK ballen: doesn't DRBL use NFS ? up_the_irons: ballen: DRBD would be "central" store (more like, two boxes get paired), and it is trivial to boot off it. That's a solved problem, but there are performance issues to account for ballen: yea I've used DRBL a year ago
to deploy a 80+ machine lab up_the_irons: ballen: DRBD is a distributed block device; not related to NFS ballen: awwww
Diskless Remote Booting Linux
hah up_the_irons: LOL ballen: anywho up_the_irons: n/m
;)
ballen: trust me, I feel your pain, I have several important VMs of my own that are going down (arpnetworks.com site itself, pledgie.com, my shared hosting server) ballen: whats the need for the new hardware, obviously other than increasing capacity. Couldn't just buy a new server? up_the_irons: ballen: I'm going to try to be as quick as possible; and once I certify the DRBD setup I'm testing currently, I will let those who want to be on that go on it. Nat_UB: ballen: Giving him hell huh? ballen: a little
just 30 minutes downtime suuuucks Nat_UB: :) ballen: but understandable I suppose Nat_UB: In this case I'm still building....so downtime no concern for me hehehehe
I work in the 'NO DOWNTIME' field...so I've heard all the griping before....Irons, keep up the good work! up_the_irons: ballen: the "just" in "just buy a new server" is the hard part ;) I don't buy cheap boxes, I have to shell out about $6K, and that just isn't gonna happen given I can double the cores on the current box *and* double the RAM
on the current box ballen: up_the_irons: does DRBD do synchronous writes?
up_the_irons: well you should have thought of that ahead of time :-p up_the_irons: ballen: "DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network based" ballen: yes yes up_the_irons: ballen: dude, to be honest, I was like this: ballen: when you write to one side of mirror
does it wait till the other side to finish up_the_irons: "I don't know how well my new VPS offering will sell, so let's not buy both CPUs and 32 GB of RAM off the bat, start with 1 CPU and 16 GB of RAM and upgrade later" <-- now that bites me in the ass :) ballen: wondering if that is the source of performance issues up_the_irons: ballen: OK, gotcha. that part of it is configurable
ballen: I configure it to wait for the write on the other end; it has to be very consistent ballen: up_the_irons: yea I figured that was your train of thought. Just giving you a hard time
yea
async without conflict resolution is a piss poor idea up_the_irons: ballen: my thinking is, i'd rather sacrifice performance than have a catastrophic failure ballen: althought
although*
if you think of it in a master -> slave configuration
where the slave will never write up_the_irons: ballen: where it won't matter much is in reads, cuz reads will come off the local disk; which is kinda an advantage over an external SAN setup ballen: whats wrong with doing writes asynchronously up_the_irons: ballen: well, master box crashes while writing to local disk, yet that block isn't replicated on the slave? I think that would be a problem ballen: hmm
I guess its a matter of not allowing it to get too far out of sync
and allowing a certain amount of time for lost data
whatever one would be comfortable with up_the_irons: ballen: yeah, I think the #1 goal with DRBD is for the data to never go out of sync; but it gives you some knobs to play with ballen: so how much performance loss are you seeing using it up_the_irons: i'm not to the hard benchmark part yet; more like "play with the machine; does it feel slow?"
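(Those "knobs" are mainly DRBD's replication protocols; the wait-for-the-remote-write behavior up_the_irons describes is protocol C. A resource sketch follows; the hostnames, devices, and addresses are made up.)
resource vmstore {
  protocol C;               # fully synchronous: a write completes only once both nodes have it
  on hostA {
    device    /dev/drbd0;
    disk      /dev/vg0/vmstore;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on hostB {
    device    /dev/drbd0;
    disk      /dev/vg0/vmstore;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}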
so far, i can't really tell the different ballen: ah up_the_irons: *difference ballen: cool, what kind of network do you have between the machines Nat_UB: He's got 10gig... :) up_the_irons: ballen: 1G ballen: just one link per box?
ah
dedicated to task? up_the_irons: ballen: right now i actually have two boxes physically linked together, no switch in between ballen: ah up_the_irons: ballen: if I got more NICs, I could bond them, and I hear the network speed would be faster than disk write speed and then performance issues are moot; but I want to *see* this in action before I certify it
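(A sketch of that bonding on the Linux host side; the interface names, address, and mode are assumptions. balance-rr is the usual pick for a two-box crossover link since it needs no switch support.)
modprobe bonding mode=balance-rr miimon=100   # loads the driver and creates bond0
ip addr add 192.168.100.1/24 dev bond0
ip link set bond0 up
ifenslave bond0 eth1 eth2                     # enslave the two replication NICs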
Nat_UB: i wish i had 10G :)
the intel ones are like $2K a pop ballen: yea good plan. There may be some overhead in bonding. Does DRBD run over TCP/IP up_the_irons: ballen: yes, it does run over TCP/IP Nat_UB: Haven't tried 10g but done some bonded stuff up_the_irons: AoE (ATA over Ethernet) is another alternative, that runs on layer 2, but is pretty much feature-less and does not afford much protection to someone accidentally writing to the volume from two boxes at the same time (which will instantly corrupt it)
I use AoE for backup images only
but that said, AoE is pretty cool in its simplicity ballen: yea AoE is pretty neat up_the_irons: i got this from a 'dd' test on my FreeBSD DRBD testing VM:
1048576000 bytes transferred in 76.882014 secs (13638769 bytes/sec)
so like 13 MBps
not all that great
but those are real writes, not cached ballen: yea, whats that same benchmark on a typical FreeBSD VM up_the_irons: let me see
this is what i'm running:
dd if=/dev/zero of=delete-me bs=1M count=2000
you want to make sure 'count' is about double your RAM, so caching goes away
now, the performance of 'dd' will give some raw numbers that may or may not correlate with how the VM actually performs during normal use; that would depend on a lot of other factors. Even if dd has lower speeds on DRBD, the trade-off for uptime and ease of VM migration may well be worth it ballen: true
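(A hedged variant of the same test: GNU dd on Linux guests can force the data to disk instead of relying on a count larger than RAM; FreeBSD's dd lacks these flags.)
dd if=/dev/zero of=delete-me bs=1M count=2000 conv=fdatasync   # fsync before reporting the rate
dd if=/dev/zero of=delete-me bs=1M count=2000 oflag=direct     # bypass the page cache on writes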
I'll be on later, and will be around during the maintenance window. Let me know if you make any breakthroughs with DRBL. ***: ballen has quit IRC ("Bye!") up_the_irons: ballen: will do!
Nat_UB: thanks for the support up there, BTW ;) Nat_UB: Sure thing! U'r the man! up_the_irons: :) ***: greg_dolley has quit IRC ()
heavysixer has joined #arpnetworks
vtoms has joined #arpnetworks
heavysixer has quit IRC ()
vtoms has quit IRC ("Leaving.")
Nat_UB has quit IRC (bartol.freenode.net irc.freenode.net)
Nat_UB has joined #arpnetworks
heavysixer has joined #arpnetworks up_the_irons: we need more IRC'ers -: up_the_irons goes back to editing build logs ***: heavysixer has quit IRC (Read error: 104 (Connection reset by peer)) jeev: build bots
;) up_the_irons: no, real people :) ***: timburke has quit IRC (Remote closed the connection)
timburke has joined #arpnetworks obsidieth: i think i might have a recruit for you -: up_the_irons does happy clappy hands up_the_irons: obsidieth: nice :)
obsidieth: someone you know? or found on a message board?
btw, guys, let me know if there are other boards out there besides WHT that have an advertising section; I post weekly on WHT and recently found webhostingboard.net. If there are others, I'd like to know :)
cd $data-center jeev: i will advertise you for $1000/month on my google PR1 up_the_irons: jeev: LOL jeev: ;)
palm pre is so lame ***: ballen has joined #arpnetworks ballen: ping obsidieth: yeah someone i know
people on efnet arent used to servers that actually stay up:p ballen: huh up_the_irons: efnet, heh
back in the day jeev: efnet sucks up_the_irons: lol, DRBD is now following you on Twitter!
"DRBD is now following you on Twitter!"
that is ballen: heh
looking at espresso machines, and grinders
is dropping a grand on an espresso machine + grinder a good thing? up_the_irons: wow
i'd rather buy a good 48 port GB switch for that
and for a grand, even that is hard to find ballen: heh
have a good gig switch already
don't use it -: up_the_irons motions ballen to hand it over ballen: I keep my computer equipment at home very light
to keep power down up_the_irons: yeah, i don't have much at home
i keep it all at the data center :) ballen: its just a netgear jeev: shit i just bought a dell poweredge
48 port gig and i haven't even sent it to the datacenter up_the_irons: there's some new cage here where the fool has like 30 power circuits in 200 sq. ft.
i was like "wwhhhhhhhhaaaaa?" jeev: that must be me up_the_irons: LOL ballen: any idea what amperage? jeev: i was a hazard at uscolo
they would detour people around my stuff during tours up_the_irons: i hate dell management interface on their f*cking switches, but they are priced well jeev: yea i aint gonna use the management stuff
just cli ballen: http://www.netgear.com/Products/Switches/FullyManaged10_100_1000Switches/GSM7212.aspx jeev: it's some ieee standard ballen: my home switch heh up_the_irons: ballen: looks like 20 ampers each; they must be in redundant A/B pair, cuz I can't imagine they actually gave him all that power ballen: damn up_the_irons: ballen: i've heard some good things about the GSM
ironically
brb ballen: k
yea its a solid switch, but I really haven't had to do much with it
Grinder: http://www.visionsespresso.com/node/73
Espresso Machine: http://www.wholelattelove.com/Rancilio/ra_silvia_2009.cfm jeev: why do you need that ballen: its more of a question of why do I not need that
as well as a small coffee addiction jeev: lol ballen: I really enjoy espresso drinks, and it would save me money in the long run if I don't goto any cafe
$3 latte once a day jeev: that's the point of coffee or drinks up_the_irons: LOL jeev: to go into the place ballen: is 1095 bucks jeev: and see how bitches
hot up_the_irons: hahaha jeev: up_the_irons wishes he could get the girls from glendale! ballen: lmao jeev: well some are fugly
but some are hot up_the_irons: the chicks in glendale, yeah some are pretty hot jeev: sexUAL ballen: def some hot females at some various cafes I've been into
also cafe is like 15 minutes away
and in the morning, F that! jeev: shit
sounds like you're from Charlevoix, MI
when you say it's 15 min away ballen: ya jeev: wow, what a guess! ballen: hah jeev: ballen, gto a linux or bsd router on your cable ? ballen: so this is a fun thing
actually I'm behind a WISP
who uses AT&T, and Charter jeev: ahh
i was gonna say
sniff me some mac addresses
i steal cable internet sometimes ballen: hah jeev: although i have a primary isp
i prefer stealing charter, 20 megs sometimes
their network sucks up_the_irons: T - 15 minutes
ironically, the time right before a maintenance window is a time where I actually wait and do nothing (I've already prepared), so now I'm just waiting for the clock to strike the right time
weird jeev: what are you gonna do
i didn't read it ballen: yep, always annoying period of time up_the_irons: cuz I could start now, but I told everyone 11, so I must wait
jeev: RAM + CPU upgrade jeev: on everything ? up_the_irons: jeev: just one box, but it holds the majority of the guys in here jeev: is arpnetworks just one box >? :)
i dont mind really
so far, stable as fark up_the_irons: jeev: no, i have several, but the one in question is my newest jeev: cool
so how is CPU split
is it burst for everyone ? up_the_irons: no burst
if you ordered 1 cpu, you get 1 cpu jeev: then what
ahh
how many cpu's in the box i'm on up_the_irons: some guys are running SMP, but not many
jeev: 1 CPU, quad core jeev: so i'm considered a one cpu user
since cpuinfo shows me a single cpu ballen: so I actually have a core all my to myself? jeev: so the server i'm on has 4 cores right now
so 4 users? heh up_the_irons: ok guys, T minus 1 minute
i'll get disconnected ballen: k have fun, don't break anything ;-) ***: mike-burns has quit IRC ("WeeChat 0.2.6.3")
[FBI] starts logging #arpnetworks at Thu Sep 10 23:41:06 2009
[FBI] has joined #arpnetworks ballen: see I just had to swear jeev: heh
is that personal stop logging ? ballen: hmm
I'd assume so
[FBI]: off ***: [FBI] has left
[FBI] starts logging #arpnetworks at Thu Sep 10 23:45:00 2009
[FBI] has joined #arpnetworks ballen: and hes back up_the_irons: w00000000000000000000000000000000t
what did I miss?
I'll show you what you guys missed:
total used free shared buffers cached
Mem: 31856 15133 16723 0 2905 64
look at "free" :) ballen: nice jeev: up_the_irons, i cancelled while you were gone.
lol just kidding ballen: lmao
ahahahah jeev: up_the_irons, so the server im' on has only 4 cores? up_the_irons: and now this:
garry@kvr02:~$ cat /proc/cpuinfo | grep 'model name'
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz -: up_the_irons slaps jeev with a trout up_the_irons: model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
jeev: now it has 8 :)
[FBI]: welcome back jeev: but...
it had 4, so 4 customers? -: jeev pokes up_the_irons up_the_irons: jeev: no no, customers can share cores jeev: oh
so how many cores do i have
just one, is it dedicated up_the_irons: jeev: there is no setting that says "this VM gets this core", although I *can* do that, just haven't. I let the Linux KVM/QEMU scheduler put the VMs on the least loaded core in real time jeev: ok up_the_irons: jeev: nobody has a dedicated core; i don't think my business model could support that. cores are the least numerous resource
RAM is easier
disk is easiest jeev: yea
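(For what it's worth, the per-VM pinning up_the_irons mentions but doesn't use is usually just taskset on the qemu process from the host; the PID and core number below are made up.)
taskset -pc 2 12345                    # pin an already-running qemu-kvm process (PID 12345) to core 2
taskset -c 2 qemu-system-x86_64 ...    # or launch the guest pinned from the start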
i dont care really ballen: 41 minutes cutting it a bit close weren't we :-p up_the_irons: heh, i just took a video of all the blinking lights on this box, w/ my iphone
ballen: oh yeah man, I failed that one hard
ballen: the RAM did not take at first, still registered 16GB, not 32 ballen: ah, tough day up_the_irons: ballen: I had to unrack box and put them in different channels cutsman: :( up_the_irons: ballen: i'm just happy everything went OK; ya never know when opening boxes and tinkering around
cutsman: whoa, who are you? :)
my Nagios is all green! ballen: yea, I tend to avoid doing such things to production boxes up_the_irons: ballen: I really try not to also; but sometimes it's unavoidable. This will be the last time major maintenance is done on a box before I get DRBD live migrations working; then it will be a moot point ballen: sounds good ***: obsidieth has joined #arpnetworks
cutsman has quit IRC ("leaving") obsidieth: it is i ballen: don don don up_the_irons: wonder who cutsman was
obsidieth: yo