is a "cloud dedicated server" just a regular dedicated server with the word cloud at the front, or is it something else?
it's a dedicated-resources server in the cloud
TO THE CLOUD!!!
it's been raining here a lot...
mkb: there's a link near the bottom that explains in detail how they work https://arpnetworks.com/news/2017/02/28/arp-networks-launches-arp-thunder-cloud-dedicated-servers.html
and no, it's not just the word cloud in front
"THEY [BMC chips] ALL SUCK"
yes they do
yes :)
ARP Thunder does sound pretty dang cool
You've got me evaluating whether/what I can migrate to it.
up_the_irons: Is there any significant difference between Thunder and VPSs running on the Ceph storage cluster, besides Thunder's increased RAM/CPU and SSD storage? Are they otherwise "just virtual machines" with better specs?
PS up_the_irons - the Thunder page makes no mention of a bandwidth quota, while the VPS and Metal pages do. Does that mean Thunder has unlimited monthly bandwidth? :P
they run on different hosts. hosts that can accommodate dedicating hardware (e.g. CPU cores). and they also enable the vmx bit, so you can run VMs of your own (just like a bare metal dedicated server would)
yeah, aware of the bandwidth issue; it's on the order page, though i just haven't got around to making that 4th row, and my css skills are lacking...
Ah, so it is there. Very cool. Very tempting...
i reckon arp thunder is cool too :)
up_the_irons: To your knowledge, has anyone run ZFS on Ceph block devices? Are there any known "gotchas" or the like?
brycec: not to my knowledge
Excellent
i would think it would be fine. it looks like any ol' block device to the OS.
up_the_irons: Are the SSD and HDD storage presented as separate block devices?
yup
s/HDD/"SATA"/
up_the_irons: Are the SSD and "SATA" storage presented as separate block devices?
they are different Ceph pools
How are storage upgrades handled? The block device just becomes "larger"?
yup
Very compelling stuff
it's one of my favorite features.
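Since the two Ceph-backed volumes are presented as plain block devices, putting ZFS on them is the ordinary procedure. A minimal sketch; the device names /dev/vdb and /dev/vdc and the pool names are assumptions for illustration, not anything the Thunder pages specify:

```shell
# Hypothetical device names: SSD volume as /dev/vdb, SATA volume as /dev/vdc.
# Each Ceph RBD volume looks like any other disk to the guest OS,
# so one pool per device works as usual.
zpool create -o ashift=12 fast /dev/vdb      # pool on the SSD-backed volume
zpool create -o ashift=12 archive /dev/vdc   # pool on the SATA-backed volume
zpool status fast archive                    # verify both pools are ONLINE
```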
someone wants a bigger disk, and it's a command we type. No more data center trips.
Seems like I could migrate from a Metal to a Large and save some $
RAM for RAM, HD for HD, it's a little cheaper than Metal
And the backend data redundancy is the icing on the cake
Also the flexibility of having SSD _and_ SATA space (for those of us on the blades that only take 2 drives, period)
right exactly
use cases i would _not_ recommend: very high bandwidth, really high disk IOPS (having your own dedicated disks, all to yourself, is better in this case)
Both good points.
not having to deal with java is a huge + too lol
don't underestimate it :)
oh yes, cannot emphasize that enough ;)
On the other hand, I'll miss ipmitool and the feeling of reliability that comes with running on bare metal.
maybe you should just do both :)
Why mercutio, it's as if you're benefitting from the upsell ;p
hah
The only thing I have to figure out is how much storage I need / how much I'm currently using and can be shaved off. (I'm coming from RAID-1 1TB, so 500GB is a bit of a step down, but how much am I _really_ using?)
brycec: you can get a 1TB volume, it's on the order form under "Extra Disk"
I know :)
bryce is trying to save money i think
and hell, just saying, if you want a 5TB or 10TB volume, just say the word ;)
ah ok
lol
it's all just $
Also, since I intend to run ZFS on it, shrinking is not an option (and growing is... problematic)
zfs grows fine i think
So I've gotta get it "right". That said, identical Thunder specs to my current Metal are just $15/mo less
well i know :) think about whether you need the SSD disk then, b/c (while it's not listed yet), i was thinking of allowing people to "opt out" of either the SSD disk or the SATA disk and get a discount. i mean, why pay if you're not gonna use it.
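The "it's a command we type" growth path, end to end, looks roughly like this. The provider-side `rbd resize` line and the device/pool names are illustrative assumptions; only the guest-side steps are something a customer would run, and as noted in the chat this only goes one way (ZFS pools cannot shrink):

```shell
# Provider side (illustrative only): enlarge the RBD image backing the volume.
#   rbd resize --size 1T <pool>/<image>
# Guest side: once the virtual disk reports the new size, let ZFS claim it.
zpool set autoexpand=on tank      # grow automatically when the device grows
zpool online -e tank /dev/vdb     # or expand explicitly, per device
zpool list tank                   # SIZE should now reflect the larger device
```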
but yeah, shrinking ain't so good
mercutio: It has to be done in a matching manner
up_the_irons: I will be quite happy to have /some/ SSD (for ZFS L2ARC/cache), but I appreciate the consideration and I recommend extending it to others.
OK :)
Now if I could downgrade the SSD to a lesser amount of storage for a "credit" towards SATA... but I think it's just getting "complicated" at that point, and I don't want to make things too complicated for your side
why not stick the OS on one partition of the ssd and zfs on the other?
actually i'd just create two zfs pools
i kind of wish zfs could auto-balance between two zfs pools of different speeds, or have disks with different speeds that are balanced
mercutio: And I would stick the OS on the SSD, sure. But I don't need 200GB of SSD for OS+L2ARC :p
database?
Not currently running anything that "heavy"
i don't know what you're doing
(The database server I do have running can keep everything in RAM anyways)
but my current thinking is that ssd is good for generic stuff and hard disk is good for archive/backup
What aren't I doing? :P (Serving porn, among many things. But nobody really cares.)
I'm tentatively thinking I'd put my running VM images (yes, I'll be running Proxmox) on the SSD and backups on the SATA. And this is where figuring out where exactly I'm using space comes into play. (Answer: about 70% of my current usage is in ZFS snapshots anyways)
We'd be happy to trade SSD for SATA "credit". I mean, why not be flexible now that we can. that's part of the excitement of all this new tech :)
Thanks up_the_irons :)
(I think I might be most tempted by the flexibility of increasing RAM "on the fly")
and that's another "command" ;)
To a certain extent, storage could always be added in various ways (i.e. another VPS exporting a partition over iSCSI), but RAM was really, truly a "visit the data center" task. And once you hit the max supported for your machine, that was that.
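The layout being discussed (OS plus L2ARC on the SSD volume, bulk data on SATA) and the "where is my space actually going" question can both be sketched with standard ZFS commands. The partition layout and pool name here are hypothetical:

```shell
# Hypothetical layout: /dev/vdb1 holds the OS, /dev/vdb2 is leftover SSD
# space, /dev/vdc is the SATA-backed volume.
zpool create tank /dev/vdc         # bulk pool on the SATA volume
zpool add tank cache /dev/vdb2     # leftover SSD partition becomes L2ARC
# Break usage down into live data vs. snapshots; the USEDSNAP column
# is the kind of number behind the "~70% is snapshots" observation above.
zfs list -o space -r tank
```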
yeah, that's the truth
Out of curiosity, what kind of hardware makes up the Ceph cluster? Are you "recycling" old kvr host machines? Or is this fancy new stuff?
The SSD side is HP ProLiant 25-bay units
The SATA side has some recycled old kvr hosts, but I'm stopping that because a 12-bay DL180 G6/G7 is looking like a better value
Ooooh. Ahhh.
What about the hosts for Thunder?
as long as I can get the fucking LSI 9211's to play nice with the G6. If you follow our Instagram, you'll see my frustration ;)
Thunder hosts are just our regular E3's and E5's, the same ones we use for Metal
Okay cool.
In fact, they are all old Metal hosts
Every time a customer migrates from Metal to Thunder, you get another host machine to use, eh?
why yes :)
And here I was slightly worried I was "sticking you with" the old machine
people still order the Metals, so it's OK. I'll sell them either way.
Thanks for the Q&A, up_the_irons. I'm definitely sold. Just a matter of figuring out _what_ I need, and a migration plan. (And thanks to mercutio too, of course)
no problem :)
(I'm even debating going from Metal to 3*Small Thunders so I can break one host and not affect the other two.)
sounds like a storm
(*Starter, not Small. My bad)
lol
yeah, that's the kind of thing that something like this makes easier to do
Indeed. Harder to manage though, and less efficient "savings", but it might be worth it. Or a pair of "mediums" for that matter.
maybe two mediums and a starter?
I only have a handful of VMs I'm hosting anyways; it's not like it's the end of the world if they're unreachable. But I'm intrigued by the "modularity" of it all.
I am curious: if Metal boxes are being repurposed as Thunder boxes, what's the speed of the storage network? Is it 1-gig ethernet, 10-gig ethernet, or some kind of InfiniBand?
nathani: you asked me this before ;) 40 Gbps InfiniBand
up_the_irons: I didn't realize the Metal dedicated boxes had InfiniBand as well; or is this something you are installing to convert them into Thunder boxes?
infiniband is awesome :) it should be everywhere
a 1 gigabit network for file storage would give pretty mediocre performance
on top of that, there'd be a lot less performance stability, as there would be issues like congestion etc. from other users.
the cool thing about having a huge amount of bandwidth available is that it doesn't really bottleneck (of course the bottleneck can move elsewhere)
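The 1 GbE vs. 40 Gbps point is easy to make concrete with line-rate arithmetic (these are raw ceilings, ignoring protocol overhead, so real throughput is somewhat lower):

```shell
# Line-rate ceilings in MB/s: divide megabits per second by 8.
gbe=$((1 * 1000 / 8))     # 1 GbE -> 125 MB/s, shared by every client on the link
ib=$((40 * 1000 / 8))     # 40 Gbps InfiniBand -> 5000 MB/s
echo "1GbE: ${gbe} MB/s, 40Gb IB: ${ib} MB/s"
```

A single modern SATA disk can already saturate much of that 125 MB/s, which is why a shared 1 GbE storage network congests so quickly.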