***: mhoran has joined #arpnetworks
ChanServ sets mode: +o mhoran
zeeshoem has quit IRC ()
mhoran has quit IRC (Read error: Connection reset by peer)
mhoran has joined #arpnetworks
ChanServ sets mode: +o mhoran
spacey has quit IRC (Ping timeout: 252 seconds)
Ehtyar has quit IRC (Quit: IRC is just multiplayer notepad)
_awyeah__ has quit IRC (Read error: Connection reset by peer)
_awyeah__ has joined #arpnetworks
nukefree has quit IRC (Ping timeout: 245 seconds)
nukefree has joined #arpnetworks
notion has quit IRC (Ping timeout: 250 seconds)
notion has joined #arpnetworks
zeshoem has quit IRC (Ping timeout: 240 seconds)
spacey has joined #arpnetworks
gowhari has left
up_the_irons: Channel survey: We're thinking of adding dedicated servers to our product line (it's a natural next step if one outgrows their VPS). Emphasis is on redundancy and reliability (just like our VPS hosts). Does this sound attractive: Intel Xeon 3.2GHz Quad Core, 16GB RAM, 2x 1TB HD, Dual PS (from separate UPS), Dual Uplinks (VRRP), /29 IP block, 20TB of bandwidth, for $169? :)
(oh, and IPMI/KVM/Remote Reboot, all that jazz included. I'd rather you reboot the box than wake me up at 3AM ;)
-: up_the_irons stops looking at hardware and wanders off...
-: CaZe is most interested in low cost
qbit_: ^
like.. if i was gonna get a dedicated server... that would be a wicked hot server indeed.. but cost is the main factor i would look at.
plett: qbit_: The idea is that you'd only want a dedicated server from ARP if you need more resources than you can get from their biggest $89/month VPS
If you can fit in a vps then that will always be cheaper than dedicated hardware
qbit_: right
lol - just saw the "$169"
yeah, that would be awesome
mike-burns: Is there a market for a cheap dedicated machine?
plett: mike-burns: For the .uk market, I've found that most people (myself included!) who used to rent/colo cheap machines are happy with VPSs
I don't know how well that reflects the rest of the world though
mike-burns: Most individuals I know in north-east US are on VPSes now.
I know companies with dedicated machines, but they expect 24/7 support and multi-thousand USD pricetags.
CaZe: I've seen dedicated machines for as little as $25/mo.
qbit_: ^ me too.. always atom procs or something similar
nothing with 16g ram for 169 tho
CaZe: Yes, well some people are only interested in the bandwidth, and don't trust/can't use VPSes.
***: Guest24486 has quit IRC (Remote host closed the connection)
Guest24486 has joined #arpnetworks
mikeputnam: options are good. some use cases could benefit from hardware, some from bandwidth.
my current use case is cheap. if i were betting the farm on a business idea involving a central database server, hardware could make more sense.
but at this point, i just want to have a stable OpenBSD shell that isn't prone to my local ISP outages
-: qbit_ eyes devio.us .. and its always-down-ness
kraigu: up_the_irons: that would be attractive if I had need for co-lo, for sure.
qbit_: up_the_irons: it's attractive to me :D
i would drop both my vps's ( one is with chunkhost ) and move to that beatch
toddf: up_the_irons: I'm not clueful on pricing of other equivalent sites but physical hardware with the features you mentioned would indeed be awesome.
kraigu: up_the_irons: that's better connectivity than we get with most of our hosts here :P
(where here == work)
***: HighJinx has quit IRC (Quit: Computer has gone to sleep.)
HighJinx has joined #arpnetworks
heavysixer has quit IRC (Remote host closed the connection)
heavysixer has joined #arpnetworks
ChanServ sets mode: +o heavysixer
up_the_irons: CaZe: Roger on the cost. I had figured that for many, the cost is a factor; but in that regard, that is what a vps is for. I thought about doing a "cheap" dedicated, but I don't think I want to get into that market.
mike-burns: i think there is a market for cheap dedicated machines, i see them offered everywhere. but it has not made sense for me to get into that market
qbit_: so cost is your main factor, but you think $169 is "awesome" for what you get? :)
qbit_: yep
up_the_irons: CaZe: and yeah, I've seen those Atom machines for very cheap too
qbit_: i've seen several companies do 16gb for $169, but not a lot. not sure how reliable they are, but their sites look good. ;)
qbit_: heh
i had a colo for a while that tried to burn me .. gave me a 1500$ bill for using 2pb of data
when i had used 2g
they were super shady
up_the_irons: 8gb for around that price is more popular. but what is funny, with that setup, is the extra 8gb (to bring to a 16gb total) is only like 75 bucks in hardware (DDR3 1333MHz). so why not bump it up?!
qbit_: lol, devio.us is down a lot?
mikeputnam: roger
kraigu: roger
qbit_: up_the_irons: seems to go down once or twice / month
up_the_irons: qbit_: o'rly? you'd want the dedicated box instead of your vps'?
toddf: I have noted, "would indeed be awesome" :)
qbit_: colo's tend to be shady, i dunno why...
qbit_: up_the_irons: ya
jpalmer: up_the_irons: the specs are awesome. I don't personally have a need for it, but if I did.. I'd be on it.
up_the_irons: jpalmer: :)
qbit_: i would make a use for it :D
up_the_irons: haha
sounds like I already sold one... ;)
Those E3-1200 Xeon's are getting kick ass reviews...
And I especially like that they have a much lower power footprint than they used to
qbit_: kew
jpalmer: up_the_irons: if you want to buy me one of those servers, and ship it.. I'll gladly accept it and give you my review in a year or two ;)
up_the_irons: jpalmer: LOL
-: jpalmer doesn't get what's funny about that offer. I'm offering a PERSONAL review!
jpalmer: it's priceless ;)
I don't even do that for the googles, HP's, and microsofts out there. It's a unique offer for ARP!
I'll even blog about your hardware. I have a core audience of like 6 people. you can't beat it!
up_the_irons: might be 6 very important people...
;)
brb
***: Guest24486 has quit IRC (Ping timeout: 276 seconds)
Guest24486 has joined #arpnetworks
portertech has quit IRC (Read error: Connection reset by peer)
jdoe: up_the_irons: 'cheap dedicated' seems to be how people get rid of old machines :P
E3-1200 xeons are basically the newer i7 desktop chips
up_the_irons: jdoe: hah, yeah i think you're right
jdoe: yeah, and I think the E5's are the server line, no?
***: portertech has joined #arpnetworks
ryk has joined #arpnetworks
ryk: hi
how can I buy an additional vps from within my control panel?
or do i just need to go to the front page and buy? if so, how can i ensure it gets linked to my existing account
jdoe: up_the_irons: I no longer have anything approaching a clue how intel's chip branding works. My guess is they're trying for a 'budget' server chip line with the e3/e5 differentiation. But I have no idea.
up_the_irons: ryk: you just make sure the email address on the order is the same, they will automatically be linked. Take care to order a /29 IP block if you do not have one already (otherwise you will not have any free IPs to assign to the new vps)
jdoe: roger
ryk: thanks up_the_irons
up_the_irons: ryk: np!
jpalmer: up_the_irons: maybe he wants IPv6 only!
up_the_irons: jpalmer: LOL
you're right, i should not assume...
jpalmer: hehe
-: jpalmer is in an odd mood today, if you hadn't noticed.
ryk: up_the_irons: can i PM you with an IP question?
related to the VPS
just don't want to paste all the IP's in here
qbit_: oh hai ryk :D
ryk: you must have me on /hilight
up_the_irons: ryk: sure
qbit_: no
it's just that this is a pretty low-chatter chan
ryk: i thoroughly enjoy the chatter in #prgmr
as they use it as their own internal IM
but publicly for everyone to see
up_the_irons: so at work here we have this 1 VPS, which up to this point I haven't managed, and now I'm getting a 2nd
so i don't know much about your policies here yet.
for example how would i request a native ipv6 /64
up_the_irons: ryk: you already have native IPv6 for your account, and the new VPS would share that block
ryk: ok thanks
tooth: up_the_irons, how's your ipv6 day?
(considering you've been on ipv6 for so long now)
up_the_irons: tooth: I did not notice any increase in traffic
ryk: i had one single customer ask if our product was ipv6-capable
so for ipv6, i can just take whatever i want from my existing allocated block and assign it to an interface, right?
(on the new instance)
qbit_: :O i need to change my ip to have something to do with c0ffee
ryk: are you sure not beef?
because beef:15:600d:f00d
i came up with that all by myself
because 1:15:1337:d00d
tooth: i made ::dead:beef:cafe as a start
::dead:babe:cafe also works
***: jdoe is now known as kalp
kalp is now known as jdoe
up_the_irons: ryk: yeah, you can assign anything from the existing block to an interface
ryk: tooth: ::babe:15:a:d00d
ah i crack myself up.
jpalmer: ryk: yes, you are auto-assigned a /64. it's yours to assign in any way you want.
ryk: thanks, just making sure
since up_the_irons said he would need to do something for the ipv4 IP in the existing /29
didn't know if that was the case for the ipv6 /64
jpalmer: autoassigned meaning, ARP assigns a /64 to your account. not in the ipv6 autodiscover/route advertisement sense. you'll have to manually configure it.
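Manually configuring an address out of an assigned /64 is a one-liner. A sketch, with hypothetical names: the prefix 2001:db8:100::/64 is the RFC 3849 documentation range standing in for a real allocation, and the interface and gateway addresses are examples, not ARP's actual values:

```shell
# Attach any host address from the /64 to the VPS's public interface
# (Linux iproute2 syntax; eth0 and the gateway are example values):
ip -6 addr add 2001:db8:100::dead:beef:cafe/64 dev eth0
ip -6 route add default via fe80::1 dev eth0

# OpenBSD equivalent (vio0 is an example interface name):
#   ifconfig vio0 inet6 2001:db8:100::dead:beef:cafe prefixlen 64
```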
well, in IPv4, he defaults to giving you a small subnet. when you add a host, you don't have enough IP's, so you have to request a larger v4 subnet. however, a /64 in ipv6 is more than enough for you to add a host :P
up_the_irons: ryk: we don't do anything to the /29 either; we just tell your new vps to use one IP within it
jpalmer: also worth noting.. your vps's will all be on the same VLAN, and IIRC.. traffic between them doesn't count against your monthly allotment
up_the_irons: jpalmer: that's correct
ryk: oh so up_the_irons is just preconfiguring the VPS then with the ipv4. that's convenient.
jpalmer: he's all nice like that.
until you get to know him. then, he's pretty mean. like.. refusing to ship you new servers for free and stuff. as if it's too much to ask.
ryk: up_the_irons: i appreciate the way you do your routing
over at prgmr we are all sitting on a /24
jpalmer: all I want is one server. 16 cores, 64gb RAM, and 4 SAS drives. shipped to my house, for $0 down, and $0/month. Is that so much to ask?
ryk: conceivably i can envision someone causing a broadcast storm and killing my connection.
up_the_irons: jpalmer: LOL
ryk: I appreciate that you appreciate it! I felt my decision to do it my way in the beginning led to a much cleaner design.
so many things just "fall into place" when you separate customers by VLAN
ryk: i would imagine they have some rate limits on broadcasts somewhere
ryk: and the layer-2 firewall rules to "hide" customers' traffic from one another on a /24 must give one a migraine
ryk: well
considering they are not yet even monitoring traffic
i doubt they have anything in place.
up_the_irons: haha
u don't get bandwidth graphs or anything?
ryk: last i heard, their plan for giving us access to monitoring was to set up vnstat somewhere
no, nothing yet.
obviously i can install vnstat on the vm itself
up_the_irons: yeah
ryk: i am drastically over my bandwidth allotment and they've never mentioned it
up_the_irons: i wonder if I should lift my self-imposed embargo on $5 accounts and just start offering them... 128MB RAM for $5, same amount of disk as $10 plan. This would blow away prgmr's offering at $5...
ryk: by an order of magnitude actually
up_the_irons: haha
jpalmer: well, that depends. why did you have the embargo in the first place? :)
ryk: probably to avoid the LEB crowd
jpalmer: LEB?
ryk: lowendbox
http://www.lowendbox.com/
blog of cheap vps deals, mostly openvz
not exactly your premium business accounts
people torrenting using up all your disk i/o
that's my guess
jpalmer: oh, yea. if it starts affecting other services, kill em
ryk: so say you are up_the_irons and you have a box
you need to provision it for say 50 256MB VPS's or 100 128MB VPS's
maybe 7200 rpm disks are fine for the 256MB VPS's
but what do you do for the 128MB VPS's? go 15k SAS?
either your customers have slow disk, or you have higher costs.
maybe the former is fine as the market will bear it
up_the_irons: jpalmer: embargo in the first place: I don't want to do _anything_ for just $5; one support ticket and there goes that month's profit
ryk: but maybe you don't want to give the appearance of selling both golf carts and cadillacs.
jpalmer: up_the_irons: makes sense
up_the_irons: ryk: you could just have less VMs per box; or assume that a VM with 128MB of RAM is not doing that much I/O
jpalmer: up_the_irons: so if I start sending you a dozen support tickets, will you terminate me?
up_the_irons: jpalmer: no, i would just slap you
jpalmer: haha
up_the_irons: and take back that server i was gonna send u
ryk: up_the_irons: yeah not sure. maybe only 50% are torrenting and the other half are just using it for IRC, not sure.
jpalmer: up_the_irons: in all seriousness though, how close are we to being able to change our own disk images in the cd tray? Do you need any help with that?
or is that not your most common ticket request these days after DNS?
up_the_irons: jpalmer: it's finally on the project list for 2nd half of this year (so like starting now), but then so is a formal backup service. I've had more requests for backups than anything else, so that will probably take precedence *unless* I can find some low hanging fruit in the ISO department and solve that quickly. But so far, nothing has come to me.
ryk: up_the_irons: doesn't virsh let you specify a full http:// URL for the ISO?
up_the_irons: jpalmer: it is a common ticket request, but it is also something I can do quite quickly
ryk: not in the older version I'm running
ryk: ah. what OS are the hosts running?
up_the_irons: so a libvirt upgrade is a dependency for the ISO project, which *also* has a dependency on getting it to play nice with apparmor, so yeah.. i'm fucked
ryk: Ubuntu Jaunty
ryk: 12.04 is out of the question?
up_the_irons: don't get me wrong, it's gonna get done, it's just a matter of devoting time and hammering it out
ryk: when I upgrade, it will be to 10.04
ryk: 12.04 is too big a jump to make me comfortable
jpalmer: up_the_irons: have you got ideas on what you want to do for backups yet?
up_the_irons: stuff runs really well, i don't wanna rock the boat
jpalmer: i want to focus on something that allows block device backup, not a mounted filesystem type one (like Linode). mounted fs can work for Linode cuz they are mainly Linux, but I pride myself on being OS agnostic, so if someone has Plan9, or ZFS on root, or something exotic, I should still be able to back it up. Even mounting UFS2 on Linux makes me uncomfortable. Who knows if all attributes will be preserved, etc...
ryk: are you using lvm on the hosts?
i'm assuming the VMs live on the hosts and not a san
up_the_irons: jpalmer: so, initial idea is to schedule an LVM snapshot and then block copy that over the wire to a backup host. *That* part is easy peasy, but what about incrementals? I was toying around with the idea of drbd for syncing remote block devices, which does *work* (kinda what it is made for), _however_, I have had issues with drbd crashing an entire box. I don't know if I want to rely on something that does that
ryk: fwiw up_the_irons on the pricing issue. $5 is probably the right price to get hobbyists, like my personal stuff
up_the_irons: ryk: yes, LVM on local storage
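The snapshot-then-block-copy flow up_the_irons describes can be sketched as follows. The LVM and ssh lines are commented out because they assume a volume group and backup host that exist only in his setup (the names are hypothetical); the live part of the demo substitutes a plain file for the block device so the dd copy itself can be run and verified anywhere:

```shell
# Sketch, assuming a VG named "vol" and a reachable backup-host
# (both hypothetical; shown commented out, not executed here):
#   lvcreate --snapshot --size 1G --name cust-disk-snap /dev/vol/cust-disk
#   dd if=/dev/vol/cust-disk-snap bs=1M | ssh backup-host 'dd of=/dev/vol/cust-disk-backup bs=1M'
#   lvremove -f /dev/vol/cust-disk-snap

# Runnable stand-in: block-copy a file image with dd, then verify it.
dd if=/dev/urandom of=cust-disk.img bs=1024 count=64 2>/dev/null
dd if=cust-disk.img of=cust-disk-backup.img bs=1024 2>/dev/null
cmp -s cust-disk.img cust-disk-backup.img && echo "copy verified"
```

Note the remote side of the pipe needs its own `dd` invocation to actually write the target device.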
ryk: roger
ryk: but high enough to keep away the riff-raff that go for the $2 or $3 deals
up_the_irons: yeah, i would *never* do $2 or $3
$5 is absolute lowest
jpalmer: up_the_irons: understood. is your plan to make this so a customer can restore his own data if needed? or would that be something they'd need to submit a ticket for (and possibly for an additional fee)?
ryk: may i suggest a zfs box for your backup box?
up_the_irons: wonder if $7 would work... lucky number.
ryk: then you can use dedup
up_the_irons: jpalmer: initially ticket but ultimately self-managed
ryk: your full backups suddenly compress to incrementals
jpalmer: yeah, data dedup would be awesome, on the storage side.
up_the_irons: ryk: does the client need ZFS too?
ryk: no
say you set up an nfs share or whatever
jpalmer: though, you'd still be transferring "full backups" worth of data over, potentially (depending on how it's done)
up_the_irons: ryk: that still wouldn't solve having to *transfer* it all each time though
ryk: and that is a bit more of a concern, b/c transferring large block devices is pretty I/O taxing
ryk: true. i was assuming it would be physically local
and thus bandwidth wouldn't be an issue
jpalmer: up_the_irons: and I presume having a backup agent installed on the VM is out too? you want to do this at the host machine level?
up_the_irons: well, yes, in the same cage (local), but that isn't the big concern, it is more like "dd if=/dev/vol/cust-disk | ssh backup-host 'dd of=/dev/vol/cust-disk-backup'" <<-- *that* is taxing
so ZFS to compress is fine, but i'd still need to do the transfer, which sucks
ryk: so what you're looking for is an incremental lvm snapshot, which doesn't exist
up_the_irons: jpalmer: i have considered backup agent on VM, it is not out of the question; but if I could find a way to do it on the host, that is SO MUCH WIN b/c it is totally transparent. Ease of Use = +1 :)
ryk: right, incremental LVM. but don't be so sure it could not be engineered. You could run something *on top* of the LVM, like drbd, to sync the volumes.
jpalmer: *nod* understood. but I just don't know it's an option at the host level, without killing your I/O for periods of time. with an agent, you can default to having it installed. clients who don't want backups can remove it (saving you bandwidth, and storage); those who do would be able to get full/differential/incremental, without killing the host.
(I'm thinking of something like bacula) and, giving each client bacula console access wouldn't be overly difficult from within the VM
ryk: bah bacula
jpalmer: as an example :P
ryk: up_the_irons is right
it should be transparent
otherwise i may as well set up my own bacula server elsewhere
up_the_irons: jpalmer: yeah, that's true. i'm not totally against it. I would need to find an agent that runs well across different OS', cuz I don't want to manage different agents
jpalmer: should be, yes. but I dunno if transparent is a real option at this point. drbd.. isn't the most reliable thing available.
ryk: you could just give us access to a storage box that's all firewalled in
and say "rsync your stuff to it"
up_the_irons: ryk: jpalmer: there is some value in setting it up as a default install, however (and u guys wouldn't know this, so I'm telling you :), lots of customers do their own installs, wiping my setup, so they would have to install the agent anyway
jpalmer: ryk: well, ignoring the transparency thing.. yeah, he could just provide an iscsi or NFS mountpoint for each client, and let them deal with their own backups.
up_the_irons: ryk: that's an idea.. clean, simple, requires some knowledge that most already know. just need to figure out quotas and making sure cust-A can't read cust-B stuff.
ryk: just mount it as another disk, even, automatically to our VPS.
jpalmer: *nod*
ryk: do your lvm thing on it
jpalmer: I actually like that idea.
up_the_irons: wait, so... wut?
:)
i got lost at "lvm thing on it"
-: jpalmer hushes and lets ryk explain
ryk gets nervous
up_the_irons: i could mount a remote storage volume to a vps, yeah. say, iscsi, or even AoE. but iscsi seems to be the thing these days.
ryk: um, i meant
regarding cust-a vs cust-b
nevermind. :(
-: up_the_irons listens
ryk: not even sure what i was going to say there.
i had something in my head about permissions
up_the_irons: jpalmer: ok, so what was your interpretation of what ryk said? :)
ryk: i did the rsync thing in the very beginning, but then permissions became an issue, and i can't remember why i didn't pursue it, but i just didn't...
ryk: ok here's what i was thinking, i think
currently, isn't it permissions that keep the VM's disk images separate?
does every VM run under a different user?
jpalmer: up_the_irons: well, I was thinking along the lines of something like.. a password-protected NFS, SMB, FTP, whatever share for each client. The client can mount it locally and write their own backup scripts to utilize it. you already have the user's account username/password for the portal.. you could potentially tie those together.
up_the_irons: if permissions are solved, i think iteration 1 for backups could simply be a chunk of storage on a backup host and give the customer a way to rsync to it
jpalmer: up_the_irons: but, if you're going to go to that length.. then you're basically just providing them with additional storage that they could utilize for things other than backups.
up_the_irons: ryk: disk images are separate just by volume-A being assigned to VM-A, and nothing else
ryk: jpalmer: does it matter though?
they're paying for it
up_the_irons: jpalmer: since the backup space is a paid service, i may not care what one uses it for :)
jpalmer: ok, in that case it's a moot point.
up_the_irons: it will be "low end" space, like very large volumes, slow disks, meant for backups. it doesn't need to be fast, just available when needed.
ryk: up_the_irons: so mount the whole remote volume to the host, create logical lvm volumes for each VM of a paying customer, and assign that as a block device directly to the VM, is that possible?
up_the_irons: jpalmer: why NFS, SMB, or FTP, when it could simply be a user account on a host with large disks? then it's like "rsync -ave ssh / backup-host:/home/foo"
jpalmer: up_the_irons: I was simply using those as examples. you can use whatever transport or protocol you think fits your environment better.
ryk: telnet
up_the_irons: jpalmer: one thing that comes to mind: FreeBSD / OpenBSD / etc... rsync to Linux host. would be best if native.
ryk: LOL
ryk: oh wait, that's not a transport protocol
then again he is talking about layer 7
jpalmer: y u confuse me
up_the_irons: ryk: mount remote volume to local VM, yes, that would be possible. I would just have to figure out the medium, like iSCSI or AoE, etc...
ryk: well what i am suggesting
to simplify it to the end-customer, so that they don't have to worry about iSCSI
up_the_irons: ryk: the remote volume would be 'seen' as a regular disk and thus could be formatted natively
ryk: is to mount it iSCSI yourself on your host
jpalmer: I think we're all 3 saying the same thing, in different ways :)
up_the_irons: ryk: yeah that's what I meant, iSCSI on the host
ryk: aye
up_the_irons: jpalmer: well, the rsync idea was a little different :)
that wouldn't require iSCSI, or remotely mounted volumes, which takes out a lot of complexity and thus is attractive
ryk: i think either would work, there are probably pros and cons to both but i would be happy with either
jpalmer: essentially, we've all three discussed: san -> mounted to host vm server -> carved up and mounted to individual vms (in a nutshell)
up_the_irons: jpalmer: yeah
ryk: the other option being basically a box where everyone has a shell account and a lot of quota
jpalmer: the rsync server idea has merit, also.
up_the_irons: if anyone has any experience with SANs, I'm all ears, cuz I've mostly stayed away from them
ryk: that one sounds like more trouble though when you consider you're now essentially managing a shell service.
up_the_irons: ryk: jpalmer: yeah, shell account with large quota is dead simple. i like dead simple.
ryk: up_the_irons: i only have some experience with freenas
that would be perfectly sufficient
jpalmer: are there any rsync-only shells? not sure how you'd make it so the account can *only* be used for rsync and nothing else.
ryk: i mount my esxi boxes to freenas via nfs and iscsi. would work for ubuntu as well.
in fact the freenas box is used exclusively for backup
jpalmer: there is one, rsync.net, and they do just that.
up_the_irons: ryk: well, it is possible to only allow rsync to shell in, and nothing else. so managing shell service is not _too_ big a deal. I already manage shells with console.cust.arpnetworks.com, and I have that all automated at this point (SSH pub key submission, etc...)
-: jpalmer is familiar with rsync.net. I mean, a shell as in rbash-esque
up_the_irons: jpalmer: you can do an rsync only shell like this in .ssh/authorized_keys: command="/foo/validate-rsync.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty <key>
jpalmer: oh, true.
heh, I was thinking of rbash and such. forgot that option to ssh.
up_the_irons: [FBI]: off
***: [FBI] has left
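For reference, a forced command like the `/foo/validate-rsync.sh` in up_the_irons's authorized_keys line (the path is his placeholder) typically just inspects `SSH_ORIGINAL_COMMAND` and refuses anything that isn't an rsync server invocation. A hypothetical sketch, restructured as a function so the check can be exercised locally; in the real script the allow branch would `exec $SSH_ORIGINAL_COMMAND`:

```shell
#!/bin/sh
# Sketch of a validate-rsync-style forced command: permit only rsync's
# server-mode invocation, refuse everything else (including no command).
validate_rsync() {
  case "$1" in
    rsync\ --server*)
      return 0 ;;                                  # rsync server call: allow
    *)
      echo "rejected: key only permits rsync" >&2
      return 1 ;;                                  # anything else: refuse
  esac
}

# Demonstration of both branches:
validate_rsync "rsync --server --sender -vlogDtpre.isf . /home/foo" && echo allowed
validate_rsync "bash -i" 2>/dev/null || echo blocked
```

The `no-pty` and forwarding restrictions on the key do the rest of the lockdown.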
[FBI] starts logging #arpnetworks at Wed Jun 06 18:41:18 2012
[FBI] has joined #arpnetworks
swajr has quit IRC (Ping timeout: 240 seconds)
HighJinx has quit IRC (Quit: Computer has gone to sleep.)
up_the_irons: ryk: jpalmer: i summarized what we talked about: https://gist.github.com/9919b9686e36bf7592c0
feel free to edit, fork, w/e...
milki: lol backups
***: HighJinx has joined #arpnetworks
up_the_irons: HighJinx: you played with the E3 or E5 CPUs at all?
The E3's have great reviews and don't burn down the bank
HighJinx: i havent left the core series yet up_the_irons
up_the_irons: HighJinx: gotcha
HighJinx: the new ivybridge based stuff is supposed to be pretty good tho
up_the_irons: HighJinx: yeah i'm looking at that stuff too
the E3-1230 V2 at just 68W TDP is very attractive
HighJinx: damn
***: zeshoem has joined #arpnetworks
zeeshoem has joined #arpnetworks
zeeshoem has quit IRC (Client Quit)