***: awyeah has joined #arpnetworks
awyeah has quit IRC (Quit: ZNC - http://znc.in)
awyeah has joined #arpnetworks
dj_goku has quit IRC (Remote host closed the connection)
dj_goku has joined #arpnetworks
dj_goku has quit IRC (Changing host)
dj_goku has joined #arpnetworks
mkb: is a "cloud dedicated server" just a regular dedicated server with the word cloud at the front or is it something else?
mercutio: it's a dedicated-resources server in the cloud
BryceBot: TO THE CLOUD!!!
mercutio: it's been raining here a lot...
up_the_irons: mkb: there's a link near the bottom that explains in detail how they work
https://arpnetworks.com/news/2017/02/28/arp-networks-launches-arp-thunder-cloud-dedicated-servers.html
and no, it's not just the word cloud in front
mkb: "THEY [BMC chips] ALL SUCK"
yes they do
up_the_irons: yes :)
***: km_ has quit IRC (Remote host closed the connection)
brycec: ARP Thunder does sound pretty dang cool
You've got me evaluating whether/what I can migrate to it.
brycec: up_the_irons: Is there any significant difference between Thunder and VPSs running on the Ceph storage cluster, besides Thunder's increased RAM/CPU and SSD storage? Are they otherwise "just virtual machines" with better specs?
PS up_the_irons - the Thunder page makes no mention of bandwidth quota, while VPS and Metal pages do. Does that mean Thunder has unlimited monthly bandwidth? :P
up_the_irons: they run on different hosts. hosts that can accommodate dedicating hardware (e.g. CPU cores). and also they enable the vmx bit, so you can run VMs of your own (just like a bare metal dedicated server would)
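Since the vmx bit is what makes nested VMs possible, a quick generic Linux check from inside a guest (nothing ARP-specific assumed here) is:

```shell
# Report whether Intel VT-x (the "vmx" CPU flag) is visible to this system.
# On AMD hardware the equivalent flag would be "svm".
flag=$(grep -qw vmx /proc/cpuinfo && echo present || echo absent)
echo "vmx: $flag"
```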
yeah aware of the bandwidth issue; it's on the order page though
i just haven't got around to making that 4th row, and my css skills are lacking...
brycec: Ah so it is there. Very cool. Very tempting...
mercutio: i reckon arp thunder is cool too :)
brycec: up_the_irons: To your knowledge, has anyone run ZFS on Ceph block devices? Are there any known "gotchas" or the like?
***: forgotten has quit IRC (Ping timeout: 255 seconds)
up_the_irons: brycec: not to my knowledge
brycec: Excellent
up_the_irons: i would think it would be fine. it looks like any ol' block device to the OS.
brycec: up_the_irons: Are the SSD and HDD storage presented as separate block devices?
up_the_irons: yup
brycec: s/HDD/"SATA"/
BryceBot: <brycec> up_the_irons: Are the SSD and "SATA" storage presented as separate block devices?
up_the_irons: they are different Ceph pools
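Assuming the two pools show up in the guest as, say, /dev/vdb (SSD) and /dev/vdc (SATA) — hypothetical device names, verify with `lsblk` first — the layout discussed here could be sketched as:

```shell
# Hypothetical device names; check `lsblk` before running.
# Bulk pool on the SATA-backed block device:
zpool create tank /dev/vdc
# SSD-backed block device as an L2ARC read cache for that pool:
zpool add tank cache /dev/vdb
zpool status tank
```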
brycec: How are storage upgrades handled, the block device just becomes "larger"?
up_the_irons: yup
brycec: Very compelling stuff
***: km_ has joined #arpnetworks
up_the_irons: it's one of my favorite features. someone wants a bigger disk, and it's a command we type. No more data center trip.
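The "command we type" on the Ceph side is presumably an `rbd resize` (an assumption about the workflow; the pool/image/device names below are made up), after which the guest simply sees a larger block device:

```shell
# Provider side (hypothetical pool/image names):
rbd resize --size 1T ssd-pool/customer-disk

# Guest side, if the device carries a ZFS pool: let the pool
# claim the newly grown space (device name is hypothetical).
zpool set autoexpand=on tank
zpool online -e tank /dev/vdc
```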
brycec: Seems like I could migrate from a Metal to a Large and save some $
up_the_irons: RAM for RAM, HD for HD, it's a little cheaper than Metal
brycec: And the backend data redundancy is the icing on the cake
Also the flexibility of having SSD _and_ SATA space (for those of us on the blades that only take 2 drives, period)
up_the_irons: right exactly
use cases i would _not_ recommend: very high bandwidth, really high disk IOPS (having your own dedicated disks, all to yourself, is better in this case)
brycec: Both good points.
mercutio: not having to deal with java is a huge + too
brycec: lol
mercutio: don't underestimate it :)
up_the_irons: oh yes, cannot emphasize that enough ;)
brycec: On the other hand I'll miss ipmitool and the feeling of reliability that comes with running on bare-metal.
mercutio: maybe you should just do both :)
brycec: Why mercutio it's as if you're benefitting from the upsell ;p
mercutio: hah
brycec: The only thing I have to figure out is how much storage I need / how much I'm currently using and can be shaved-off.
(I'm coming from RAID-1 1TB, so 500GB is a bit of a step-down, but how much am I _really_ using?)
up_the_irons: brycec: you can get a 1TB volume, it's on the order form under "Extra Disk"
brycec: I know :)
mercutio: bryce is trying to save money i think
up_the_irons: and hell, just saying, if you want a 5TB or 10TB volume, just say the word ;)
ah ok
brycec: lol it's all just $
Also, since I intend to run ZFS on it, shrinking is not an option (and growing is... problematic)
mercutio: zfs grows fine i think
brycec: So I've gotta get it "right". That said, identical Thunder specs to my current Metal are just $15/mo less
mercutio: well i know :)
up_the_irons: think if you need the SSD disk then, b/c (while it's not listed yet), i was thinking of allowing people to "opt out" of either the SSD disk or the SATA disk and get a discount. i mean, why pay if you're not gonna use it.
mercutio: but yeah shrinking ain't so good
brycec: mercutio: It has to be done in a matching manner
brycec: up_the_irons: I will be quite happy to have /some/ SSD (for ZFS L2ARC/cache), but I appreciate the consideration and I recommend extending it to others.
up_the_irons: OK :)
brycec: Now if I could downgrade the SSD to a lesser amount of storage for a "credit" towards SATA... but I think it's just getting "complicated" at that point and I don't want to make things too complicated for your side
mercutio: why not stick the OS on one partition of ssd
and zfs on the other
actually
i'd just create two zfs pools
i kind of wish zfs could auto balance between two zfs pools of different speeds, or have disks with different speeds that are balanced
brycec: mercutio: And I would stick the OS on the SSD, sure. But I don't need 200GB of SSD for OS+L2ARC :p
mercutio: database?
brycec: Not currently running anything that "heavy"
mercutio: i don't know what you're doing
brycec: (The database server I do have running can keep everything in RAM anyways)
mercutio: but my current thinking is that ssd is good for generic stuff
and hard-disk is good for archive/backup
brycec: What aren't I doing? :P (Serving porn, among many things. But nobody really cares.)
I'm tentatively thinking I'd put my running VM images (yes, I'll be running Proxmox) on the SSD and backups on the SATA. And this is where figuring out where exactly I'm using space comes into play.
(Answer: about 70% of my current usage is in ZFS snapshots anyways)
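The standard way to see how much space only the snapshots are holding is ZFS's per-dataset space accounting:

```shell
# USEDSNAP is space that would be freed only by destroying snapshots;
# "tank" is a placeholder pool name.
zfs list -r -o name,used,usedbysnapshots,usedbydataset tank
```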
up_the_irons: We'd be happy to trade SSD for SATA "credit". I mean, why not be flexible now that we can. that's part of the excitement of all this new tech :)
brycec: Thanks up_the_irons :)
(I think I might be most tempted by the flexibility of increasing RAM "on the fly")
up_the_irons: and that's another "command" ;)
brycec: To a certain extent, storage could always be added in various ways (e.g. another VPS exporting a partition over iSCSI) but RAM was really, truly a "visit the datacenter" task. And once you hit the max supported for your machine, that was that.
up_the_irons: yeah that's the truth
brycec: Out of curiosity, what kind of hardware makes up the Ceph cluster? Are you "recycling" old kvr host machines? Or is this fancy new stuff?
up_the_irons: The SSD side is HP ProLiant 25-bay units
The SATA side has some recycled old kvr hosts, but I'm stopping that because a 12-bay DL180 G6/7 is looking like a better value
brycec: Ooooh. Ahhh. What about the hosts for Thunder?
up_the_irons: as long as I can get the fucking LSI 9211's to play nice with the G6. If you follow our Instagram, you'll see my frustration ;)
Thunder hosts are just our regular E3 and E5's, the same ones we use for Metal
brycec: Okay cool.
up_the_irons: In fact, they are all old Metal hosts
brycec: Every time a customer migrates from Metal to Thunder, you get another host machine to use, eh?
up_the_irons: why yes :)
brycec: And here I was slightly worried I was "sticking you with" the old machine
up_the_irons: people still order the Metal's, so it's OK. I'll sell them either way.
brycec: Thanks for the Q&A up_the_irons. I'm definitely sold. Just a matter of figuring out _what_ I need, and a migration plan.
(And thanks to mercutio too, of course)
up_the_irons: no problem :)
brycec: (I'm even debating going from Metal to 3*Small Thunders so I can break one host and not affect the other two.)
mercutio: sounds like a storm
brycec: (*Starter, not Small. My bad) lol
mercutio: yeah that's the kind of thing that something like this makes easier to do
brycec: Indeed. Harder to manage though, and less efficient "savings" but it might be worth it. Or a pair of "mediums" for that matter.
mercutio: maybe two mediums and a starter?
brycec: I only have a handful of VMs I'm hosting anyways, it's not like it's the end of the world if they're unreachable. But I'm intrigued at the "modularity" of it all.
nathani: I am curious, if Metal boxes are being repurposed as Thunder boxes, what's the speed of the storage network? Is it 1 gig Ethernet, 10 gig Ethernet, or some kind of InfiniBand?
up_the_irons: nathani: you asked me this before ;)
40 Gbps InfiniBand
nathani: up_the_irons: I didn't realize the metal dedicated boxes had infiniband as well, or is this something you are installing to convert them into thunder boxes?
mercutio: infiniband is awesome :)
it should be everywhere
a 1 gigabit network for file storage would give pretty mediocre performance
on top of that, there'd be a lot less performance stability
as there would be issues like congestion etc from other users. the cool thing about having a huge amount of bandwidth available is that it doesn't really bottleneck
(of course the bottleneck can move elsewhere)
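To put rough numbers on that (theoretical line rate only, ignoring protocol overhead):

```shell
# Theoretical line-rate throughput in MB/s (decimal units, no overhead).
gige=$(( 1000000000 / 8 / 1000000 ))      # 1 GbE
ib40=$(( 40000000000 / 8 / 1000000 ))     # 40 Gbps InfiniBand
echo "1 GbE:  ${gige} MB/s"
echo "40G IB: ${ib40} MB/s"
```

A single modern SATA disk can stream sequentially at well over 125 MB/s, so one busy client can already saturate gigabit; the 40 Gbps fabric leaves headroom for many clients before the network becomes the bottleneck.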
***: awyeah has quit IRC (Quit: ZNC - http://znc.in)
awyeah has joined #arpnetworks
karstensrage has quit IRC (Ping timeout: 246 seconds)
Guest39686 has joined #arpnetworks
Guest39686 has quit IRC (Ping timeout: 246 seconds)
karstensrage has joined #arpnetworks
karstensrage is now known as Guest2979