[00:06] *** mhoran has quit IRC (Read error: Connection reset by peer)
[00:06] *** mhoran has joined #arpnetworks
[00:06] *** ChanServ sets mode: +o mhoran
[00:15] *** zeeshoem has quit IRC ()
[00:24] *** mhoran has quit IRC (Read error: Connection reset by peer)
[00:24] *** mhoran has joined #arpnetworks
[00:24] *** ChanServ sets mode: +o mhoran
[00:32] *** spacey has quit IRC (Ping timeout: 252 seconds)
[00:48] *** Ehtyar has quit IRC (Quit: IRC is just multiplayer notepad)
[01:26] *** _awyeah__ has quit IRC (Read error: Connection reset by peer)
[01:27] *** _awyeah__ has joined #arpnetworks
[01:30] *** nukefree has quit IRC (Ping timeout: 245 seconds)
[01:31] *** nukefree has joined #arpnetworks
[01:36] *** notion has quit IRC (Ping timeout: 250 seconds)
[01:38] *** notion has joined #arpnetworks
[02:13] *** zeshoem has quit IRC (Ping timeout: 240 seconds)
[02:28] *** spacey has joined #arpnetworks
[03:26] *** gowhari has left
[05:19] Channel survey: We're thinking of adding dedicated servers to our product line (it's a natural next step if one outgrows their VPS). Emphasis is on redundancy and reliability (just like our VPS hosts). Does this sound attractive: Intel Xeon 3.2GHz Quad Core, 16GB RAM, 2x 1TB HD, Dual PS (from separate UPS), Dual Uplinks (VRRP), /29 IP block, 20TB of bandwidth, for $169 ? :)
[05:20] (oh, and IPMI/KVM/Remote Reboot, all that jazz included. I'd rather you reboot the box than wake me up at 3AM ;)
[05:23] * up_the_irons stops looking at hardware and wanders off...
[05:50] * CaZe is most interested in low cost
[06:42] ^
[06:43] like.. if i was gonna get a dedicated server... that would be a wicked hot server indeed.. but cost is the main factor i would look at.
[06:44] qbit_: The idea is that you'd only want a dedicated server from ARP if you need more resources than you can get from their biggest $89/month VPS
[06:45] If you can fit in a vps then that will always be cheaper than dedicated hardware
[06:48] right
[06:50] lol - just saw the "$169"
[06:51] yeah, that would be awesome
[06:53] Is there a market for a cheap dedicated machine?
[07:00] mike-burns: For the .uk market, I've found that most people (myself included!) who used to rent/colo cheap machines are happy with VPSs
[07:02] I don't know how well that reflects the rest of the world though
[07:02] Most individuals I know in north-east US are on VPSes now.
[07:03] I know companies with dedicated machines, but they expect 24/7 support and multi-thousand USD pricetags.
[07:13] I've seen dedicated machines for as little as $25/mo.
[07:14] ^ me too.. always atom procs or something similar
[07:14] nothing with 16g ram for 169 tho
[07:19] Yes, well some people are only interested in the bandwidth, and don't trust/can't use VPSes.
[07:30] *** Guest24486 has quit IRC (Remote host closed the connection)
[07:31] *** Guest24486 has joined #arpnetworks
[07:43] options are good. some use cases could benefit from hardware, some from bandwidth.
[07:44] my current use case is cheap. if i were betting the farm on a business idea involving a central database server, hardware could make more sense.
[07:45] but at this point, i just want to have a stable OpenBSD shell that isn't prone to my local ISP outages
[07:52] * qbit_ eyes devio.us .. and it's always down-ness
[09:23] up_the_irons: that would be attractive if I had need for co-lo, for sure.
[09:24] up_the_irons: it's attractive to me :D
[09:24] i would drop both my vps's ( one is with chunkhost ) and move to that beatch
[09:27] up_the_irons: I'm not clueful on pricing of other equivalent sites but physical hardware with the features you mentioned would indeed be awesome.
[09:27] up_the_irons: that's better connectivity than we get with most of our hosts here :P
[09:27] (where here == work)
[09:51] *** HighJinx has quit IRC (Quit: Computer has gone to sleep.)
[10:12] *** HighJinx has joined #arpnetworks
[11:10] *** heavysixer has quit IRC (Remote host closed the connection)
[11:12] *** heavysixer has joined #arpnetworks
[11:12] *** ChanServ sets mode: +o heavysixer
[12:06] CaZe: Roger on the cost. I had figured that for many, the cost is a factor; but in that regard, that is what a vps is for. I thought about doing a "cheap" dedicated, but I don't think I want to get into that market.
[12:07] mike-burns: i think there is a market for cheap dedicated machines, i see them offered everywhere. but it has not made sense for me to get into that market
[12:08] qbit_: so cost is your main factor, but you think $169 is "awesome" for what you get? :)
[12:08] yep
[12:09] CaZe: and yeah, I've seen those Atom machines for very cheap too
[12:10] qbit_: i've seen several companies do 16gb for $169, but not a lot. not sure how reliable they are, but their sites look good. ;)
[12:10] heh
[12:11] i had a colo for a while that tried to burn me .. gave me a 1500$ bill for using 2pb of data
[12:11] when i had used 2g
[12:11] they were super shady
[12:12] 8gb for around that price is more popular. but what is funny, with that setup, is the extra 8gb (to bring to a 16gb total) is only like 75 bucks in hardware (DDR3 1333MHz). so why not bump it up?!
[12:13] qbit_: lol, devio.us is down a lot?
[12:13] mikeputnam: roger
[12:13] kraigu: roger
[12:13] up_the_irons: seems to go down once or twice / month
[12:13] qbit_: o'rly? you'd want the dedicated box instead of your vps' ?
[12:14] toddf: I have noted, "would indeed be awesome" :)
[12:15] qbit_: colo's tend to be shady, i dunno why...
[12:15] up_the_irons: ya
[12:16] up_the_irons: the specs are awesome. I don't personally have a need for it, but if I did.. I'd be on it.
[12:18] jpalmer: :)
[12:20] i would make a use for it :D
[12:22] haha
[12:22] sounds like I already sold one... ;)
[12:27] Those E3-1200 Xeon's are getting kick ass reviews...
[12:27] And I especially like that they have a much lower power footprint than they used to
[12:29] kew
[12:34] up_the_irons: if you want to buy me one of those servers, and ship it.. I'll gladly accept it and give you my review in a year or two ;)
[12:34] jpalmer: LOL
[12:34] * jpalmer doesn't get what's funny about that offer. I'm offering a PERSONAL review!
[12:35] it's priceless ;)
[12:35] I don't even do that for the googles, HP's, and microsofts out there, It's a unique offer for ARP!
[12:35] I'll even blog about your hardware. I have a core audience of like 6 people. you can't beat it!
[12:36] might be 6 very important people...
[12:36] ;)
[12:36] brb
[12:44] *** Guest24486 has quit IRC (Ping timeout: 276 seconds)
[12:46] *** Guest24486 has joined #arpnetworks
[12:49] *** portertech has quit IRC (Read error: Connection reset by peer)
[13:11] up_the_irons: 'cheap dedicated' seems to be how people get rid of old machines :P
[13:12] E3-1200 xeons are basically the newer i7 desktop chips
[13:12] jdoe: hah, yeah i think you're right
[13:12] jdoe: yeah, and I think the E5's are the server line, no?
[13:19] *** portertech has joined #arpnetworks
[13:19] *** ryk has joined #arpnetworks
[13:19] hi
[13:19] how can I buy an additional vps from within my control panel?
[13:20] or do i just need to go to the front page and buy? if so, how can i ensure it gets linked to my existing account
[13:21] up_the_irons: I no longer have anything approaching a clue how intel's chip branding works. My guess is they're trying for a 'budget' server chip line with the e3/e5 differentiation. But I have no idea.
[13:22] ryk: you just make sure the email address on the order is the same, they will automatically be linked.
Take care to order a /29 IP block if you do not have one already (otherwise you will not have any free IPs to assign to the new vps)
[13:22] jdoe: roger
[13:23] thanks up_the_irons
[13:23] ryk: np!
[13:25] up_the_irons: maybe he wants IPv6 only!
[13:26] jpalmer: LOL
[13:26] you're right, i should not assume...
[13:26] hehe
[13:26] * jpalmer is in an odd mood today, if you hadn't noticed.
[13:38] up_the_irons: can i PM you with an IP question?
[13:38] related to the VPS
[13:39] just don't want to paste all the IP's in here
[13:39] oh hai ryk :D
[13:40] you must have me on /hilight
[13:41] ryk: sure
[13:44] no
[13:44] it's just that this is a pretty low chatter chan
[13:46] i thoroughly enjoy the chatter in #prgmr
[13:46] as they use it as their own internal IM
[13:46] but publicly for everyone to see
[13:48] up_the_irons: so at work here we have this 1 VPS, which up to this point I haven't managed, and now I'm getting a 2nd
[13:48] so i don't know much about your policies here yet.
[13:49] for example how would i request a native ipv6 /64
[13:49] ryk: you already have native IPv6 for your account, and the new VPS would share that block
[13:53] ok thanks
[13:53] up_the_irons, how's your ipv6 day?
[13:54] (considering you've been on ipv6 for so long now)
[13:55] tooth: I did not notice any increase in traffic
[13:57] i had one single customer ask if our product was ipv6-capable
[14:36] so for ipv6, i can just take whatever i want from my existing allocated block and assign it to an interface, right?
[14:36] (on the new instance)
[14:38] :O i need to change my ip to have something to do with c0ffee
[14:40] are you sure not beef?
[14:41] because beef:15:600d:f00d
[14:41] i came up with that all by myself
[14:41] because 1:15:1337:d00d
[14:46] i made ::dead:beef:cafe as a start
[14:46] ::dead:babe:cafe also works
[15:22] *** jdoe is now known as kalp
[15:22] *** kalp is now known as jdoe
[15:39] ryk: yeah, you can assign anything from the existing block to an interface
[15:44] tooth: ::babe:15:a:d00d
[15:44] ah i crack myself up.
[15:59] ryk: yes, you are auto-assigned a /64. it's yours to assign in any way you want.
[15:59] thanks, just making sure
[16:00] since up_the_irons said he would need to do something for the ipv4 IP in the existing /29
[16:00] didn't know if that was the case for the ipv6 /64
[16:00] autoassigned meaning, ARP assigns a /64 to your account. not in the ipv6 autodiscover/route advertisement sense. you'll have to manually configure it.
[16:01] well, in IPv4, he defaults to giving you a small subnet. when you add a host, you don't have enough IP's, so you have to request a larger v4 subnet. however, a /64 in ipv6 is more than enough for you to add a host :P
[16:02] ryk: we don't do anything to the /29 either; we just tie your new vps to one IP within it
[16:02] also worth noting.. your vps's will all be on the same VLAN, and IIRC.. traffic between them doesn't count against your monthly allotment
[16:02] jpalmer: that's correct
[16:02] oh so up_the_irons is just preconfiguring the VPS then with the ipv4. that's convenient.
[16:03] he's all nice like that.
[16:03] until you get to know him. then, he's pretty mean. like.. refusing to ship you new servers for free and stuff. as if it's too much to ask.
[16:04] up_the_irons: i appreciate the way you do your routing
[16:04] over at prgmr we are all sitting on a /24
[16:04] all I want is one server. 16 cores, 64gb RAM, and 4 SAS drives. shipped to my house, for $0 down, and $0/month. Is that so much to ask?
[16:04] conceivably i can envision someone causing a broadcast storm and killing my connection.
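[Editor's note: the "manually configure it" step for the /64 discussed above might look like the following Debian-style interfaces fragment. The 2001:db8:1234:5678::/64 prefix and the ::1 gateway are made-up placeholders, not ARP's real assignments; the idea is just that any address you pick inside your /64 can be configured statically on the guest.]

```
# /etc/network/interfaces fragment (Debian-style guest)
# prefix and gateway below are hypothetical stand-ins
iface eth0 inet6 static
    address 2001:db8:1234:5678::dead:beef:cafe
    netmask 64
    gateway 2001:db8:1234:5678::1
```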
[16:05] jpalmer: LOL
[16:06] ryk: I appreciate that you appreciate it! I felt my decision to do it my way in the beginning led to a much cleaner design.
[16:06] so many things just "fall into place" when you separate customers by VLAN
[16:06] ryk: i would imagine they have some rate limits on broadcasts somewhere
[16:07] ryk: and the layer-2 firewall rules to "hide" customers' traffic from one another on a /24 must give one a migraine
[16:08] well
[16:08] considering they are not yet even monitoring traffic
[16:08] i doubt they have anything in place.
[16:08] haha
[16:08] u don't get bandwidth graphs or anything?
[16:09] last i heard, their plan for giving us access to monitoring was to set up vnstat somewhere
[16:09] no, nothing yet.
[16:09] obviously i can install vnstat on the vm itself
[16:09] yeah
[16:10] i am drastically over my bandwidth allotment and they've never mentioned it
[16:10] i wonder if I should lift my self-imposed embargo on $5 accounts and just start offering them... 128MB RAM for $5, same amount of disk as $10 plan. This would blow away prgmr's offering at $5...
[16:10] by an order of magnitude actually
[16:10] haha
[16:12] well, that depends. why did you have the embargo in the first place? :)
[16:13] probably to avoid the LEB crowd
[16:14] LEB?
[16:14] lowendbox
[16:14] http://www.lowendbox.com/
[16:15] blog of cheap vps deals, mostly openvz
[16:15] not exactly your premium business accounts
[16:15] people torrenting using up all your disk i/o
[16:15] that's my guess
[16:17] oh, yea. if it starts affecting other services, kill em
[16:18] so say you are up_the_irons and you have a box
[16:18] you need to provision it for say 50 256MB VPS's or 100 128MB VPS's
[16:18] maybe 7200 rpm disks are fine for the 256MB VPS's
[16:18] but what do you do for the 128MB VPS's? go 15k SAS?
[16:19] either your customers have slow disk, or you have higher costs.
[16:19] maybe the former is fine as the market will bear it
[16:19] jpalmer: embargo in the first place: I don't want to do _anything_ for just $5; one support ticket and there goes that month's profit
[16:19] but maybe you don't want to give the appearance of selling both golf carts and cadillacs.
[16:20] up_the_irons: makes sense
[16:20] ryk: you could just have less VMs per box; or assume that a VM with 128MB of RAM is not doing that much I/O
[16:20] up_the_irons: so if I start sending you a dozen support tickets, will you terminate me?
[16:20] jpalmer: no, i would just slap you
[16:20] haha
[16:20] and take back that server i was gonna send u
[16:21] up_the_irons: yeah not sure. maybe only 50% are torrenting and the other half are just using it for IRC, not sure.
[16:21] up_the_irons: in all seriousness though, how close are we to being able to change our own disk images in the cd tray? Do you need any help with that?
[16:21] or is that not your most common ticket request these days after DNS?
[16:23] jpalmer: it's finally on the project list for 2nd half of this year (so like starting now), but then so is a formal backup service. I've had more requests for backups than anything else, so that will probably take precedence *unless* I can find some low hanging fruit in the ISO department and solve that quickly. But so far, nothing has come to me.
[16:23] up_the_irons: doesn't virsh let you specify a full http:// URL for the ISO?
[16:23] jpalmer: it is a common ticket request, but it is also something I can do quite quickly
[16:24] ryk: not in the older version I'm running
[16:24] ah. what OS are the hosts running?
[16:24] so a libvirt upgrade is a dependency for the ISO project, which *also* has a dependency on getting it to play nice with apparmor, so yeah.. i'm fucked
[16:24] ryk: Ubuntu Jaunty
[16:25] 12.04 is out of the question?
[16:25] don't get me wrong, it's gonna get done, it's just a matter of devoting time and hammering it out
[16:25] ryk: when I upgrade, it will be to 10.04
[16:25] ryk: 12.04 is too big a jump to make me comfortable
[16:26] up_the_irons: have you got ideas on what you want to do for backups yet?
[16:26] stuff runs really well, i don't wanna rock the boat
[16:28] jpalmer: i want to focus on something that allows block device backup, not a mounted filesystem type one (like Linode). mounted fs can work for Linode cuz they are mainly Linux, but I pride myself on being OS agnostic, so if someone has Plan9, or ZFS on root, or something exotic, I should still be able to back it up. Even mounting UFS2 on Linux makes me uncomfortable. Who knows if all
[16:28] attributes will be preserved, etc...
[16:28] are you using lvm on the hosts?
[16:29] i'm assuming the VMs live on the hosts and not a san
[16:30] jpalmer: so, initial idea is to schedule an LVM snapshot and then block copy that over the wire to a backup host. *That* part is easy peasy, but what about incrementals? I was toying around with the idea of drbd for syncing remote block devices, which does *work*, (kinda what it is made for), _however_, I have had issues with drbd crashing an entire box, I don't know if I want to rely on
[16:30] something that does that
[16:30] fwiw up_the_irons on the pricing issue. $5 is probably the right price to get hobbyists, like my personal stuff
[16:30] ryk: yes, LVM on local storage
[16:30] ryk: roger
[16:30] but high enough to keep away the riff-raff that go for the $2 or $3 deals
[16:31] yeah, i would *never* do $2 or $3
[16:31] $5 is absolute lowest
[16:31] up_the_irons: understood. is your plan to make this so a customer can restore his own data if needed? or would that be something they'd need to submit a ticket for (and possibly for an additional fee)
[16:31] may i suggest a zfs box for your backup box?
[16:31] wonder if $7 would work... lucky number.
[16:31] then you can use dedup
[16:31] jpalmer: initially ticket but ultimately self-managed
[16:32] your full backups suddenly compress to incrementals
[16:32] yeah, data dedup would be awesome, on the storage side.
[16:32] ryk: does the client need ZFS too?
[16:32] no
[16:32] say you set up an nfs share or whatever
[16:32] though, you'd still be transferring "full backups" worth of data over, potentially (depending on how it's done)
[16:32] ryk: that still wouldn't solve having to *transfer* it all each time though
[16:33] ryk: and that is a bit more of a concern, b/c transferring large block devices is pretty I/O taxing
[16:33] true. i was assuming it would be physically local
[16:33] and thus bandwidth wouldn't be an issue
[16:34] up_the_irons: and I presume having a backup agent installed on the VM is out too? you want to do this at the host machine level?
[16:34] well, yes, in the same cage (local), but that isn't the big concern, it is more like "dd if=/dev/vol/cust-disk | ssh backup-host 'dd of=/dev/vol/cust-disk-backup'" <<-- *that* is taxing
[16:34] so ZFS to compress is fine, but i'd still need to do the transfer, which sucks
[16:35] so what you're looking for is an incremental lvm snapshot, which doesn't exist
[16:35] jpalmer: i have considered backup agent on VM, it is not out of the question; but if I could find a way to do it on the host, that is SO MUCH WIN b/c it is totally transparent. Ease of Use = +1 :)
[16:36] ryk: right, incremental LVM. but don't be so sure it could not be engineered. You could run something *on top* of the LVM, like drbd, to sync the volumes.
[16:37] *nod* understood. but I just don't know it's an option at the host level, without killing your I/O for periods of time. with an agent, you can default to having it installed. clients who don't want backups can remove it (saving you bandwidth, and storage) those who do, would be able to get full/differential/incremental. without killing the host.
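[Editor's note: the snapshot-then-block-copy idea described above might look like the sketch below. The volume and host names are hypothetical; since lvcreate and dd-over-ssh need root and a real LVM setup, plain files stand in for the block devices here, with the real commands shown in comments.]

```shell
# Stand-ins for the real block devices; real commands are in the comments.
SRC=/tmp/cust-disk.img
DST=/tmp/cust-disk-backup.img

# real: lvcreate --snapshot --size 1G --name cust-disk-snap /dev/vol/cust-disk
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null    # fake 4MB "volume"

# real: dd if=/dev/vol/cust-disk-snap bs=1M | ssh backup-host 'dd of=/dev/vol/cust-disk-backup bs=1M'
dd if="$SRC" bs=1M 2>/dev/null | dd of="$DST" bs=1M 2>/dev/null

# real: lvremove -f /dev/vol/cust-disk-snap   (drop the snapshot when done)
cmp -s "$SRC" "$DST" && echo "backup matches source"
```

This reproduces the whole volume every run, which is exactly the I/O cost being complained about: nothing in the pipeline knows which blocks changed, so "incrementals" need something extra (drbd, dedup on the receiving side, or an agent in the guest).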
[16:37] (I'm thinking of something like bacula) and, giving each client bacula console access wouldn't be overly difficult from within the VM
[16:38] bah bacula
[16:38] as an example :P
[16:38] up_the_irons is right
[16:38] it should be transparent
[16:38] otherwise i may as well set up my own bacula server elsewhere
[16:38] jpalmer: yeah, that's true. i'm not totally against it. I would need to find an agent that runs well across different OS', cuz I don't want to manage different agents
[16:38] should be, yes. but I dunno if transparent is a real option at this point. drbd.. isn't the most reliable thing available.
[16:39] you could just give us access to a storage box that's all firewalled in
[16:39] and say "rsync your stuff to it"
[16:39] ryk: jpalmer : there is some value in setting it up as a default install, however (and u guys wouldn't know this, so I'm telling you :), lots of customers do their own installs, wiping my setup, so they would have to install the agent anyway
[16:40] ryk: well, ignoring the transparency thing.. yeah, he could just provide an iscsi or NFS mountpoint for each client, and let them deal with their own backups.
[16:40] ryk: that's an idea.. clean, simple, requires some knowledge that most already know. just need to figure out quotas and making sure cust-A can't read cust-B stuff.
[16:41] just mount it as another disk, even, automatically to our VPS.
[16:41] *nod*
[16:41] do your lvm thing on it
[16:41] I actually like that idea.
[16:41] wait, so... wut?
[16:41] :)
[16:41] i got lost at "lvm thing on it"
[16:41] * jpalmer hushes and lets ryk explain
[16:42] * ryk gets nervous
[16:42] i could mount a remote storage volume to a vps, yeah. say, iscsi, or even AoE. but iscsi seems to be the thing these days.
[16:42] um, i meant
[16:42] regarding cust-a vs cust-b
[16:42] nevermind. :(
[16:43] * up_the_irons listens
[16:43] not even sure what i was going to say there.
[16:43] i had something in my head about permissions
[16:43] jpalmer: ok, so what was your interpretation of what ryk said? :)
[16:44] ryk: i did the rsync thing in the very beginning, but then permissions became an issue, and i can't remember why i didn't pursue it, but i just didn't...
[16:44] ok here's what i was thinking, i think
[16:45] currently, isn't it permissions that keep the VM's disk images separate?
[16:45] does every VM run under a different user?
[16:45] up_the_irons: well, I was thinking along the lines of something like.. a password protected NFS, SMB, FTP whatever share for each client. The client can mount it locally, and write their own backup scripts to utilize it. you already have the users account username/password for the portal.. you could potentially tie those together.
[16:45] if permissions are solved, i think iteration 1 for backups could simply be a chunk of storage on a backup host and give the customer a way to rsync to it
[16:46] up_the_irons: but, if you're going to go to that length.. then you're basically just providing them with additional storage that they could utilize for things other than backups.
[16:46] ryk: disk images are separate just by volume-A being assigned to VM-A, and nothing else
[16:47] jpalmer: does it matter though?
[16:47] they're paying for it
[16:47] jpalmer: since the backup space is a paid service, i may not care what one uses it for :)
[16:47] ok, in that case it's a moot point.
[16:48] it will be "low end" space, like very large volumes, slow disks, meant for backups. it doesn't need to be fast, just available when needed.
[16:49] up_the_irons: so mount the whole remote volume to the host, create logical lvm volumes for each VM of a paying customer, and assign that as a block device directly to the VM, is that possible?
[16:49] jpalmer: why NFS, SMB, or FTP, when it could simply be a user account on a host with large disks?
then it's like "rsync -ave ssh / backup-host:/home/foo"
[16:50] up_the_irons: I was simply using those as examples. you can use whatever transport or protocol you think fits your environment better.
[16:50] telnet
[16:50] jpalmer: one thing that comes to mind: FreeBSD / OpenBSD / etc... rsync to Linux host. would be best if native.
[16:50] ryk: LOL
[16:50] oh wait, that's not a transport protocol
[16:51] then again he is talking about layer 7
[16:51] jpalmer: y u confuse me
[16:51] ryk: mount remote volume to local VM, yes, that would be possible. I would just have to figure out the medium, like iSCSI or AoE, etc...
[16:51] well what i am suggesting
[16:51] to simplify it to the end-customer, so that they don't have to worry about iSCSI
[16:52] ryk: the remote volume would be 'seen' as a regular disk and thus could be formatted natively
[16:52] is to mount it iSCSI yourself on your host
[16:52] I think we're all 3 saying the same thing, in different ways :)
[16:52] ryk: yeah that's what I meant, iSCSI on the host
[16:52] aye
[16:52] jpalmer: well, the rsync idea was a little different :)
[16:53] that wouldn't require iSCSI, or remotely mounted volumes, which takes out a lot of complexity and thus is attractive
[16:53] i think either would work, there are probably pros and cons to both but i would be happy with either
[16:53] essentially, we've all three discussed: san -> mounted to host vm server -> carved up and mounted to individual vms (in a nutshell)
[16:53] jpalmer: yeah
[16:53] the other option being basically a box where everyone has a shell account and a lot of quota
[16:54] the rsync server idea has merit. also.
[16:54] if anyone has any experience with SANs, I'm all ears, cuz I've mostly stayed away from them
[16:54] that one sounds like more trouble though when you consider you're now essentially managing a shell service.
[16:54] ryk: jpalmer : yeah, shell account with large quota is dead simple. i like dead simple.
[16:54] up_the_irons: i only have some experience with freenas
[16:55] that would be perfectly sufficient
[16:55] are there any rsync-only shells? not sure how you'd make it so the account can *only* be used for rsync and nothing else.
[16:55] i mount my esxi boxes to freenas via nfs and iscsi. would work for ubuntu as well.
[16:55] in fact the freenas box is used exclusively for backup
[16:55] jpalmer: there is one, rsync.net and they do just that.
[16:56] ryk: well, it is possible to only allow rsync to shell in, and nothing else. so managing shell service is not _too_ big a deal. I already manage shells with console.cust.arpnetworks.com, and I have that all automated at this point (SSH pub key submission, etc...)
[16:56] * jpalmer is familiar with rsync.net. I mean, a shell as in rbash-esque
[16:57] jpalmer: you can do an rsync only shell like this in .ssh/authorized_keys: command="/foo/validate-rsync.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
[16:58] oh, true.
[16:58] heh, I was thinking of rbash and such. forgot that option to ssh.
[16:58] [FBI]: off
[16:58] *** [FBI] has left
[18:41] *** [FBI] starts logging #arpnetworks at Wed Jun 06 18:41:18 2012
[18:41] *** [FBI] has joined #arpnetworks
[18:46] *** swajr has quit IRC (Ping timeout: 240 seconds)
[18:53] *** HighJinx has quit IRC (Quit: Computer has gone to sleep.)
[18:59] ryk: jpalmer : i summarized what we talked about: https://gist.github.com/9919b9686e36bf7592c0
[18:59] feel free to edit, fork, w/e...
[19:06] lol backups
[19:19] *** HighJinx has joined #arpnetworks
[22:44] HighJinx: you played with the E3 or E5 CPUs at all?
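[Editor's note: a minimal validate-rsync.sh of the kind that command= line points at might look like the sketch below. With a forced command, sshd ignores what the client asked to run and executes the script instead, passing the original request in $SSH_ORIGINAL_COMMAND; the script only re-executes it if it is rsync's server-side invocation. The script is written to /tmp here purely so it can be exercised without a real SSH session; /foo/validate-rsync.sh in the chat is just an illustrative path.]

```shell
# Forced-command script: allow rsync's server invocation, refuse all else.
cat > /tmp/validate-rsync.sh <<'EOF'
#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
  "rsync --server"*)
    exec $SSH_ORIGINAL_COMMAND   # hand off to the rsync server process
    ;;
  *)
    echo "rejected: only rsync is allowed" >&2
    exit 1
    ;;
esac
EOF
chmod +x /tmp/validate-rsync.sh

# anything that isn't rsync gets refused:
SSH_ORIGINAL_COMMAND="/bin/sh -i" /tmp/validate-rsync.sh 2>/dev/null \
  || echo "shell attempt blocked"
```

Combined with the no-pty and no-forwarding options from the authorized_keys line, this gives an rsync-only account without needing a restricted shell like rbash.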
[22:45] The E3's have great reviews and don't burn down the bank
[22:46] i haven't left the core series yet up_the_irons
[22:47] HighJinx: gotcha
[22:47] the new ivybridge based stuff is supposed to be pretty good tho
[22:53] HighJinx: yeah i'm looking at that stuff too
[22:54] the E3-1230 V2 at just 68W TDP is very attractive
[22:55] damn
[22:57] *** zeshoem has joined #arpnetworks
[22:57] *** zeeshoem has joined #arpnetworks
[22:58] *** zeeshoem has quit IRC (Client Quit)