[00:10] *** koan has quit IRC (Quit: Changing server)
[00:40] *** alexstanford15 has quit IRC (Ping timeout: 248 seconds)
[00:41] *** alexstanford15 has joined #arpnetworks
[00:48] *** andol has quit IRC (Quit: leaving)
[00:57] *** andol has joined #arpnetworks
[01:11] *** alexstanford15 has quit IRC (Ping timeout: 248 seconds)
[01:12] *** alexstanford15 has joined #arpnetworks
[04:09] *** koan has joined #arpnetworks
[07:12] *** dzup has joined #arpnetworks
[10:48] *** hazardous has joined #arpnetworks
[11:10] *** tabthorpe has quit IRC (Remote host closed the connection)
[11:29] *** dzup has quit IRC (Remote host closed the connection)
[12:01] up_the_irons: any chance you got specials running on dedis? possible to swap out 1x1tb for 2x500gb or something?
[12:23] hazardous: he is rather accommodating on the vps side, so if you propose something logistically possible and economically sensible, you've got the openings of a good negotiation ;-)
[12:24] 'logistically possible' is likely going to equate to his hw inventory availability ;-)
[12:24] dedicated side, not vps though
[12:24] it's for a project i'm working on that will involve a significant amount of cpu pounding
[12:25] that i'm not sure vps hosts are typically ok with
[12:26] well what I mean to say is, I've made some even-steven deals on the vps side (less mem, more disk, etc) and it was accepted. I've not experienced the dedicated side, but since you're dealing with the same person, I'm willing to bet if you propose something that's sensible, he'll listen ;-)
[12:26] vps hosts are not overprovisioned in either cpu or disk
[12:27] so you can check, but the only time I've been aware of someone causing issues to other vps systems is if they are hitting the disk io 'unfairly' (there is not a 'provisioning technique' for disk io that is in use; newer qemu might be able to limit io speed, but I kind of like the responsiveness as it is ..)
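The "newer qemu might be able to limit io speed" remark refers to QEMU's per-drive throttling options. A hedged sketch of what such a cap could look like (the image name and limit values are invented for illustration, and this is not ARP's actual configuration):

```shell
# Illustrative only: cap one guest disk at roughly 300 IOPS and 40 MB/s.
# Newer QEMU accepts iops= and bps= directly on -drive; the numbers and
# the guest.img path here are made up.
qemu-system-x86_64 \
  -m 1024 \
  -drive file=guest.img,format=qcow2,if=virtio,iops=300,bps=41943040
```

Without such a cap, fairness between guests is effectively first-come-first-served on the host's disk, which is the "responsiveness as it is" tradeoff mentioned above.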
[12:28] I say 'newer qemu' since kvm and qemu are basically one and the same these days
[12:29] heh
[12:29] yeah i am not going to be pounding disk much
[12:29] but i am using it as a development/staging system
[12:29] and code will occasionally run amok, memory leak, cpu leak
[12:29] and i feel that i would potentially be a bad neighbour to vm's so :<
[12:30] I have used my vps originally as a development platform that eventually turned production; its cpu is faster than anything I have here in the office, and it does a crashdump and reboot faster than anything I own too
[12:31] if you are swapping to death, probably; short of that, likely not. but hey, if your budget permits bare metal, you could install e.g. proxmox.com and have your own vps instances for annoying each other inside your big shell
[12:38] hazardous: i generally don't have specials on the dedis because they are already a really good deal for the specs. check competition.
[12:38] hazardous: are you preferring 2x500gb for mirroring or something?
[12:38] up_the_irons: yeah i just want sw raid1
[12:39] i refuse to run single disk as a personal issue
[12:39] disk space is not a matter to me, but just at least having something resembling redundancy
[12:39] hazardous: what plan are you looking at?
[12:39] lowest one, but that's already overkill for my needs
[12:39] wouldn't be using anywhere near the bw cap too
[12:39] hazardous: so where do you want the price point to be?
[12:40] what can you do for about a hundredish
[12:41] hazardous: the thing about disks is, look at the price difference between a 500GB and 1TB disk, it's like $10. so sometimes people want to pay less for a smaller disk but they don't realize the cost is nearly identical.
[12:41] ah
[12:41] *** medum_ has quit IRC (Remote host closed the connection)
[12:41] hm, what are the chances you can offer vps with dedicated cores (don't know if qemu does that)
[12:43] hazardous: we don't offer that; a vps is a shared service. you want dedicated cores, get a dedicated server :)
[12:47] *** tabthorpe has joined #arpnetworks
[12:49] wow, that is a lot of bw on dedicated servers.
[12:49] weird it costs $129 for an 8gb server, and $149 for a 16gb server, but $10 for 8gb ram
[13:13] *** phrac has joined #arpnetworks
[13:37] Sold one dedicated server, and another coming in shortly. Dang, good day for dedicateds
[13:40] ;)
[13:41] heh i wouldn't mind a dedicated server, but i'm nowhere near to being able to afford one
[13:41] phrac you seem familiar but i can't place it
[13:42] oh. synirc
[13:54] hazardous: I don't remember what channel I was on in synirc
[13:58] ahh
[13:58] newznab probably
[14:16] *** medum has joined #arpnetworks
[15:00] *** HighJinx has quit IRC (Ping timeout: 256 seconds)
[15:00] *** HighJinx has joined #arpnetworks
[15:01] *** hive-mind has quit IRC (Ping timeout: 256 seconds)
[15:03] *** hive-mind has joined #arpnetworks
[15:18] *** Ehtyar has joined #arpnetworks
[15:31] hmm. upgraded to openbsd 5.2 and network is broken. em0: watchdog timeout -- resetting
[15:31] anyone seen that before?
[15:33] yip
[15:33] how frequent?
[15:33] shows up again every couple minutes and won't respond to anything
[15:34] oh
[15:34] it's nothing like that often for me
[15:34] i did uh, something
[15:34] is that 32 bit or 64bit?
[15:34] clock: unknown CMOS layout
[15:34] em0: watchdog timeout -- resetting
[15:34] em0: watchdog timeout -- resetting
[15:35] i've had it twice since bootup
[15:35] 12:35PM up 50 days, 18:05, 2 users, load averages: 0.27, 0.14, 0.10
[15:35] hmm i have 5.2-current
[15:35] amd64. 5.1 worked great, no issues. can't even ping 5.2
[15:36] oh
[15:36] it doesn't work at all
[15:36] does em0 show the ip?
[15:36] yea
[15:36] try a kernel from 5.3's beta thingy?
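The pricing surprise above resolves with simple arithmetic: whatever the separate "$10 for 8gb ram" figure refers to, the delta between the two quoted plans ($129 with 8 GB, $149 with 16 GB) is $20 for the extra 8 GB. A minimal sketch of that arithmetic:

```shell
# Plan prices as quoted in-channel.
price_8gb=129
price_16gb=149
delta=$((price_16gb - price_8gb))    # extra cost of 8 more GB of RAM
cents_per_gb=$((delta * 100 / 8))    # 250 cents, i.e. $2.50/GB
echo "\$${delta} for the extra 8 GB, ${cents_per_gb} cents/GB"
```

The same "nearly identical cost" effect shows up with the 500GB-vs-1TB disks mentioned earlier: the component price difference is small relative to the plan price.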
[15:36] see if it behaves
[15:37] well booting 5.2 bsd.rd i did an ftp install
[15:37] ftp://ftp.openbsd.org/pub/OpenBSD/snapshots/amd64/bsd
[15:37] hmm
[15:37] finished and rebooted, now it's dead
[15:37] and that worked at that point
[15:37] 'disable mpbios' is required unless you're on kvr27 or higher
[15:37] hmm
[15:37] i've been using custom kernels
[15:37] so yeh mpbios may be disabled for me anyway
[15:37] toddf: i'll give that a try
[15:37] mpbios at bios0 function 0x0 not configured
[15:37] does that mean it's disabled?
[15:38] yes
[15:38] when's 5.3 out?
[15:38] oh may
[15:38] that's ages away
[15:39] traditionally the 1st part of may. look at the previous release schedule, it's mostly like clockwork, except the official release tends to coincide with a few days after the cdroms have arrived in the prepurchasers' hands..
[15:39] depends; if you're familiar with the release cycle, you'll note things are in 5.3-beta now. find bugs now or wait till 5.4 to have them fixed in an official release, basically
[15:39] at work using 5.0 on something
[15:39] actually 4.8 on something too i think
[15:40] i've only had this em0 issue
[15:40] i think i reduced my timeout
[15:40] before reset
[15:40] but i assume it's a qemu issue rather than openbsd
[15:41] works with mpbios disabled, thanks
[15:42] todd: do you know anything about the watchdog thing?
[15:43] it may be that it doesn't really need to reset it
[15:44] worst case 'ifconfig em0 down up', but if the nic works, ignore random warnings like that. if it affects your performance, worry .. ;-)
[15:44] heh
[15:44] it used to
[15:44] i really should diff to see what i did
[15:44] in other news, current and 5.3 will come with virtio support
[15:44] it used to happen more often than it does now
[15:45] toddf: yah, that's why i was wondering about 5.3 :)
[15:45] my 5.2-current has virtio but i'm on an old node
[15:45] kvr27 and higher have stable virtio nic/disk/memory balloon support
[15:45] new nodes work with smp too
[15:46] sd0 at scsibus2 targ 0 lun 0: SCSI3 0/direct fixed
[15:46] sd0: 81920MB, 512 bytes/sector, 167772160 sectors
[15:46] toddf: oh you're on a new node?
[15:46] virtio0 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00: Virtio Network Device
[15:46] vio0 at virtio0: address 52:54:00:ee:e1:22
[15:46] virtio0: apic 1 int 11
[15:46] virtio1 at pci0 dev 4 function 0 "Qumranet Virtio Storage" rev 0x00: Virtio Block Device
[15:47] vioblk0 at virtio1
[15:47] virtio1: apic 1 int 11
[15:47] oh yeah, have one vps on kvr27 and another on kvr28. when I'm not busy and have some other things taken care of, I'll see if I can move my other vps'en to newer kvr systems *grin*
[15:47] time to scoot, kids and evening routine are ready to steal my keyboard!
[15:54] my gawd, 12 servers in 3U! -- http://www.supermicro.com/products/system/3U/5037/SYS-5037MC-H12TRF.cfm
[15:54] they even support 4x disks (2.5") per server with an add-on tray
[15:54] up_the_irons: ssd's :)
[15:55] up_the_irons: dell have had something like that for years
[15:55] oh i think it was 4 servers in 2u
[15:55] 12 in 3u is pretty crazy
[15:55] the problem is that you can't really feed enough power
[15:55] it's all very nice having high density servers, but you need enough power/cooling for the rack still
[15:56] mercutio: yeah, but where i'm located i can already have 60A per cabinet
[15:57] how many amps do those servers use?
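The virtio probe lines quoted above are what a guest on one of the newer nodes shows at boot. A quick way to confirm a guest actually attached virtio devices is to grep its dmesg; the excerpt below is the one pasted in-channel:

```shell
# dmesg excerpt as pasted above: one virtio bus per device (network + block).
dmesg_excerpt='virtio0 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00: Virtio Network Device
vio0 at virtio0: address 52:54:00:ee:e1:22
virtio1 at pci0 dev 4 function 0 "Qumranet Virtio Storage" rev 0x00: Virtio Block Device
vioblk0 at virtio1'
# Count virtio bus attachments; on a real box you would run:
#   dmesg | grep -c '^virtio[0-9]'
count=$(printf '%s\n' "$dmesg_excerpt" | grep -c '^virtio[0-9]')
echo "$count"   # 2 here: one nic (vio0), one block device (vioblk0)
```

The `vio0`/`vioblk0` lines are the drivers binding to those buses, which is what "stable virtio nic/disk" support looks like from inside the guest.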
[15:57] 60a seems pretty high, but that's 110v i imagine :)
[15:57] so that's like less than 30 amp 240v
[15:58] yeah 60a at 120v
[15:59] i imagine you'll still hit your limit :)
[15:59] is that 30 amps * 2?
[16:00] cos the other thing is if you want to use all 60 amps you lose redundant power i imagine?
[16:00] yeah for sure you'll hit the limit, but you can get even more power per rack if you need
[16:00] no, i have 60A per rack *with* redundant power
[16:00] oh cool
[16:00] (well, one of the racks is 60A, not all of 'em)
[16:00] so 60a * 2
[16:02] how do prices compare to 1u boxes?
[16:05] it looks like that is sata not sas
[16:06] i wonder what enterprise 2.5" hard-disks are around that are sata not sas
[16:08] ebay seems to think $5000 for a chassis with 8 motherboards
[16:08] then need cpu, ram, hard-disk
[16:09] http://www.ebay.com/itm/DELL-POWEREDGE-C6100-XS23-TY3-SERVER-8x-L5520-QUAD-CORE-CPUS-96GB-MEM-4x-TRAYS-/170971630149?_trksid=p2047675.m2109&_trkparms=aid%3D555003%26algo%3DPW.CAT%26ao%3D1%26asc%3D142%26meid%3D5714756017857345721%26pid%3D100010%26prg%3D1076%26rk%3D4%26sd%3D320946716752%26
[16:09] erk long
[16:09] http://tinyurl.com/abaszwe
[16:09] it supports ss
[16:09] versus c6100 is much cheaper :)
[16:09] *sas
[16:10] get one of those c6100 and sell cheap dedicated maybe? :)
[16:10] that's used though, right?
[16:10] yeh
[16:10] it's older too
[16:10] e5520 cpus
[16:11] err it's actually l5520
[16:11] a little better
[16:11] still l5520 is fast enough for a lot of things
[16:11] for that particular one, it's only 4x nodes right?
[16:11] where would the other 4x go?? ;)
[16:11] is it a different tray type if u want 8x? I know jack about dell...
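The amps/volts back-and-forth above is just W = V × A. A tiny sketch of the numbers being thrown around (60 A feeds at 120 V, and the same power expressed at 240 V):

```shell
# 60 A at 120 V per feed, as discussed above.
amps=60
volts=120
watts=$((amps * volts))        # 7200 W per feed
amps_240=$((watts / 240))      # same power at 240 V needs only 30 A
echo "${watts} W per feed, or ${amps_240} A at 240 V"
```

With redundant feeds ("60a * 2") the cabinet has two such circuits, but sustained draw is normally kept within one feed so a single-feed failure doesn't trip the survivor.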
[16:11] i dunno
[16:11] it depends how much you pay for rack space
[16:12] the dell is an older one, only does 4 servers
[16:12] but it's only 2u
[16:12] it's still twice the density of 1u servers
[16:12] and takes cheaper 3.5" hard-disks
[16:12] it seems to only come with 4 disk trays though
[16:12] and one power supply
[16:13] yeah noticed the one ps
[16:13] all in all, not a bad deal
[16:14] depends what requirements are
[16:14] like that is probably similar speed to e3-1240v2
[16:14] but way older
[16:14] yeah
[16:14] and 1100w ps
[16:14] 8 real cores versus 4 real cores
[16:14] *** dzup has joined #arpnetworks
[16:15] but like half the performance per core
[16:15] while with 1620w ps with supermicro e3's, you get 8x servers
[16:15] yeah but psu doesn't mean everything
[16:15] it doesn't really tell you how much power it uses
[16:15] but it tells u the max power
[16:15] and supermicro is like way more expensive
[16:15] it's not even max power really
[16:15] not if you compare new vs. new
[16:15] often servers only max out at half their psu wattage
[16:16] hmm
[16:16] but nevertheless, it obviously will never be more than 1620 W
[16:16] well yes
[16:16] that's actually quite low
[16:16] e3 cpus are pretty low power
[16:16] i suppose i was just thinking about how hazardous was saying that he wanted cheap dedicated
[16:17] and there's a fair amount of people who want cheap dedicated rather than newest greatest dedicated
[16:17] but i suppose you also have to weigh up cost of power etc
[16:17] and rack space
[16:17] yeah, the thing is, no matter how much the unit is on ebay, if it sucks a lot of power, you can never really go "cheap" dedicated
[16:17] and chance of problems
[16:18] so it is easier to go cheap dedicated on really power efficient stuff
[16:18] yeah, small investment great long term cost versus large investment low long term cost
[16:18] cuz you can pack it in the rack
[16:18] i imagine your power / rack space isn't amazingly cheap
[16:18] nope
[16:19] probably cheaper in places like kansas
[16:19] maybe i should make an "ARP Lite(tm)" service and put boxes in a shitty data center, and just sell 'em real cheap
[16:19] heh
[16:19] you could do semi-dedicated
[16:19] there are quite a few shitty ones really close to my own
[16:19] with vt-d hardware passthrough for ethernet/disks
[16:19] err can't do passthrough for disks
[16:19] it needs to be a controller
[16:19] so maybe just dedicated disk
[16:20] yeah cuz you can do /dev/sda on vps #1, /dev/sdb on vps #2, etc...
[16:20] like if you can do 4x2.5" in a blade
[16:20] you could do a dedicated ssd per semi-dedicated
[16:20] and have 4 of them
[16:20] although what if a disk fails :)
[16:20] yeah disk failure is always the catch
[16:20] but the customer could just order 2 disks
[16:21] yeah but then you need to be able to support 8x2.5" in a blade :)
[16:21] wasn't it 4x2.5" * 12 servers?
[16:21] well yeah or you just can't put 4x vps in the box
[16:21] yeah
[16:21] true
[16:21] like $80/month, 2 servers per box
[16:22] 2 hard-disks each
[16:22] yeah
[16:22] 80*2 = 160 per blade
[16:22] dedicated ethernet
[16:22] $1920 per chassis
[16:22] you can probably stick a cheap pci-e dual port adapter in if you want internal network as well
[16:22] not a bad scenario
[16:22] how much are blades?
[16:22] the blades come with the chassis, but u still need to add RAM and CPU
[16:22] oh really
[16:23] i think it was like $5500 for the chassis
[16:23] so that ebay price i found was terrible?
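The per-chassis figure at the end of that exchange is straightforward to reconstruct: $80/month per semi-dedicated slot, 2 slots per blade, 12 blades in the 3U chassis. Note these are hypothetical musings from the chat, not an actual price list:

```shell
# Back-of-envelope from the chat above; all prices are hypothetical.
per_slot=80                          # $/month per semi-dedicated slot
per_blade=$((per_slot * 2))          # 2 slots per blade  -> $160
per_chassis=$((per_blade * 12))      # 12 blades per 3U   -> $1920
echo "\$${per_blade} per blade, \$${per_chassis} per chassis, monthly"
```

Against a roughly $5500 chassis (plus CPU/RAM per blade), that monthly number is what would have to cover hardware amortization, power, and the rack space being discussed.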
[16:23] oh that was more like ebay price :)
[16:23] so it's still like $1000 to $1200 for a server
[16:26] mostly just saving on rack space
[16:27] *** mikeputnam has quit IRC (Ping timeout: 246 seconds)
[16:27] *** twobithacker has quit IRC (Ping timeout: 246 seconds)
[16:27] *** up_the_irons has quit IRC (Ping timeout: 246 seconds)
[16:28] *** mikeputnam has joined #arpnetworks
[16:29] O_o
[16:29] *** up_the_irons has joined #arpnetworks
[16:29] *** ChanServ sets mode: +o up_the_irons
[16:29] mercutio: so you're saying the e3 cores are 8x real cores, but half the performance of the 4x l5540 cores?
[16:30] wait the e3 only has 4 real cores
[16:31] up_the_irons: err e3 is 4 real cores
[16:31] l5520 is 8 real cores
[16:31] they're similar performance
[16:31] err
[16:31] 2xl5520 i mean
[16:31] which is what it had
[16:31] that's according to cpumark
[16:31] *** twobithacker has joined #arpnetworks
[16:31] not real testing
[16:31] i've never actually used an l5520
[16:32] ah ok
[16:32] i just explored things a bit before. shipping is a bitch from the US
[16:32] actually
[16:32] 4711 passmark l5520 single cpu
[16:32] 9457 passmark 1240v2
[16:33] 7625 passmark dual e5520
[16:33] i think e and l 5520 are the same performance
[16:33] passmark doesn't list dual l5520
[16:34] 8103 passmark 1230v1
[16:34] so really it's more similar to the 1230
[16:37] e and l are the same, yeah
[16:37] just different voltage
[16:37] not in all cpus
[16:37] oh maybe in all cpus
[16:37] oh? thought l5520 is the low voltage version of e5520
[16:37] it's the i5 low power versions that aren't the same
[16:38] but they have a suffix rather than a prefix
[16:38] yeah for the xeons i believe "e" and "l" are the same, just diff voltage
[16:38] 5520 is a lot lower power than 5420 from memory
[16:39] they also do extended page table support
[16:40] e3-1240v2 will be better power consumption though
[16:42] from passmark dual opteron 4234 looks good too
[16:42] well mostly for being similar speed to e3 but cheap
[16:43] *** [Derek] has quit IRC (Read error: Operation timed out)
[16:44] yeah i think i have a couple 4200 cpu's around here
[16:45] what's power usage like?
[16:45] can't find anything to do with them since i used the 6200's in the newer VM hosts and I like the performance
[16:45] ahh
[16:45] i'll put the 4200's in the new backup box, yeah...
[16:45] when trying to research the 4234 i found stuff about the 6200
[16:45] you can't beat the core count
[16:45] it's always the way
[16:46] people always benchmark/discuss/talk about the higher end cpus
[16:46] i mean for the price and cores, it's really nice. lots of cores is great for virtualization.
[16:46] and ignore the new lower cost or lower power ones
[16:46] yeah exactly
[16:46] like there's hardly any discussion about the low power i5s
[16:46] u need to find the sweet spot
[16:46] they reduce performance
[16:46] but power matters too
[16:46] i was mostly interested in whether it'd be quieter
[16:47] that's even harder to determine though
[16:48] intel are weird atm
[16:48] there's e5-2687w and e5-2690 ... e5-2687w is faster
[16:48] but not by a lot... it's also slightly cheaper
[16:48] but both are real expensive
[16:49] good if you pay per cpu or such though i suppose
[16:49] for licensing
[16:51] *** [Derek] has joined #arpnetworks
[16:51] *** [Derek] has quit IRC (Changing host)
[16:51] *** [Derek] has joined #arpnetworks
[16:52] yeah the higher end intels just blow the bank
[17:10] *** dzup has quit IRC (Ping timeout: 276 seconds)
[18:54] *** dj_goku has joined #arpnetworks
[18:54] *** dj_goku has quit IRC (Changing host)
[18:54] *** dj_goku has joined #arpnetworks
[19:01] up_the_irons: have you been training for any races? (running)
[19:02] I just started running, thinking about maybe training for a 10k or something
[20:42] up_the_irons: was I your first dedicated server sale?
[21:10] up_the_irons: also have you considered offering SSD drives on the ARP Metal Dedicated servers? If so what price point would they be at?
[21:59] *** HighJinx has quit IRC (Ping timeout: 255 seconds)
[22:02] *** HighJinx has joined #arpnetworks
[22:26] *** Ehtyar has quit IRC (Quit: Nice Scotty, now beam my clothes up too!)
[22:29] amdprophet: no races. i messed up my lower back a year ago and haven't really been able to run since :( (but it's getting better)
[22:30] darn :(
[22:30] mnathani: no, the first sale was around last june or so
[22:31] mnathani: have not considered ssd's yet
[22:32] amdprophet: last race i did was a 500 yard sprint and i came in at 90 seconds
[22:36] damn, that's pretty impressive
[22:48] oh nice, you finally rolled out the dedi page
[22:54] rawr
[23:10] Webhostbudd: it's been there for months ;)
[23:11] --- Day changed Tue Oct 23 2012
[23:11] 04:43 <@up_the_irons> Finally official: https://www.arpnetworks.com/dedicated
[23:11] months! ^
[23:12] :)
[23:12] <-- Master of logs
[23:14] I'm more impressed by the 4-5 months of unofficial dedicated boxes
[23:16] really? why? :)
[23:17] Mostly that anyone knew about it without there being a page up (unless there was a page up... unofficially ;p)
[23:20] lmao
[23:29] brycec: there wasn't, but email works great for specs and quotes ;)
[23:29] oh yeah i forgot to share this, here is a sales response i got from another host about ipv6: http://i.imgur.com/U1C5S.png
[23:29] enjoy the laugh
[23:33] I... uh... wha?
[23:33] * brycec falls over
[23:35] CPU down to 3C...
[23:35] er, wrong channel
[23:35] LOL
[23:35] (unless you care about the fun of running a laptop inside a refrigerator)
[23:36] 2C!
[23:37] damnit my focus-follows-mouse is being very inaccurate
[23:55] Your mouse is distracted?
[23:59] No... But with this latest awesome update, the focus seems to get stuck somewhere along the way about 50% of the time
[23:59] just go cwm and be done with it ;-)