Who | What | When
---|---|---
*** | awyeah has quit IRC (Quit: ZNC - http://znc.in) | [03:12] | |
awyeah has joined #arpnetworks | [03:18] | ||
awyeah has quit IRC (Quit: ZNC - http://znc.in) | [03:25] | ||
awyeah has joined #arpnetworks | [03:32] | ||
................................... (idle for 2h52mn) | |||
dj_goku has quit IRC (Remote host closed the connection) | [06:24] | ||
dj_goku has joined #arpnetworks | [06:35] |
dj_goku has quit IRC (Changing host) | [06:35] |
dj_goku has joined #arpnetworks | [06:35] |
................................................ (idle for 3h56mn) | |||
mkb | is a "cloud dedicated server" just a regular dedicated server with the word cloud at the front or is it something else? | [10:31] | |
mercutio | it's dedicated resources server in the cloud | [10:35] | |
BryceBot | TO THE CLOUD!!! | [10:35] | |
mercutio | it's been raining here a lot... | [10:36] | |
......... (idle for 43mn) | |||
up_the_irons | mkb: there's a link near the bottom that explains in detail how they work
https://arpnetworks.com/news/2017/02/28/arp-networks-launches-arp-thunder-cloud-dedicated-servers.html and no, it's not just the word cloud in front | [11:19] | |
mkb | "THEY [BMC chips] ALL SUCK"
yes they do | [11:22] | |
up_the_irons | yes :) | [11:23] | |
................................ (idle for 2h36mn) | |||
*** | km_ has quit IRC (Remote host closed the connection) | [13:59] | |
..... (idle for 23mn) | |||
brycec | ARP Thunder does sound pretty dang cool
You've got me evaluating whether/what I can migrate to it.
up_the_irons: Is there any significant difference between Thunder and VPSs running on the Ceph storage cluster, besides Thunder's increased RAM/CPU and SSD storage? Are they otherwise "just virtual machines" with better specs?
PS up_the_irons - the Thunder page makes no mention of bandwidth quota, while VPS and Metal pages do. Does that mean Thunder has unlimited monthly bandwidth? :P | [14:22] |
up_the_irons | they run on different hosts. hosts that can accommodate dedicating hardware (e.g. CPU cores). and also they enable the vmx bit, so you can run VMs of your own (just like a bare metal dedicated server would)
yeah aware of the bandwidth issue; it's on the order page though i just haven't got around to making that 4th row, and my css skills are lacking... | [14:26] | |
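(A minimal sketch of checking that vmx bit from inside a guest; the commands below are generic Linux, not anything from ARP's docs, and module/package names vary by distro:)

```sh
# Confirm hardware virtualization is exposed to the guest and KVM can use it.
grep -cE 'vmx|svm' /proc/cpuinfo   # non-zero means the flag is passed through
sudo modprobe kvm_intel            # load KVM (Intel hosts; kvm_amd otherwise)
ls -l /dev/kvm                     # this device appearing means nested VMs will run
```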
brycec | Ah so it is there. Very cool. Very tempting... | [14:27] | |
.... (idle for 17mn) | |||
mercutio | i reckon arp thunder is cool too :) | [14:44] | |
...... (idle for 26mn) | |||
brycec | up_the_irons: To your knowledge, has anyone run ZFS on Ceph block devices? Are there any known "gotchas" or the like? | [15:10] | |
..... (idle for 22mn) | |||
*** | forgotten has quit IRC (Ping timeout: 255 seconds) | [15:32] | |
...... (idle for 25mn) | |||
up_the_irons | brycec: not to my knowledge | [15:57] | |
brycec | Excellent | [15:57] | |
up_the_irons | i would think it would be fine. it looks like any ol' block device to the OS. | [15:58] | |
brycec | up_the_irons: Are the SSD and HDD storage presented as separate block devices? | [15:58] | |
up_the_irons | yup | [15:58] | |
brycec | s/HDD/"SATA"/ | [15:58] | |
BryceBot | <brycec> up_the_irons: Are the SSD and "SATA" storage presented as separate block devices? | [15:58] | |
up_the_irons | they are different Ceph pools | [15:58] | |
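(Guest-side sketch of what those two devices could look like; /dev/vdb and /dev/vdc are assumed names, and the two-pool layout is just one option:)

```sh
# The two volumes arrive as ordinary block devices; one ZFS pool on each.
lsblk                        # SSD- and SATA-backed volumes, e.g. vdb and vdc
zpool create fast /dev/vdb   # pool on the SSD-backed device
zpool create tank /dev/vdc   # pool on the SATA-backed device
```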
brycec | How are storage upgrades handled, the block device just becomes "larger"? | [15:59] | |
up_the_irons | yup | [15:59] | |
brycec | Very compelling stuff | [15:59] | |
*** | km_ has joined #arpnetworks | [15:59] | |
up_the_irons | it's one of my favorite features. someone wants a bigger disk, and it's a command we type. No more data center trip. | [16:00] | |
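(On the Ceph side, that command is presumably something along the lines of rbd resize; the pool/image names and size here are illustrative assumptions:)

```sh
# Grow a customer's RBD volume in place -- no data center trip.
rbd resize --pool ssd --image customer-disk --size 512000   # size in MB
rbd info ssd/customer-disk                                  # verify the new size
```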
brycec | Seems like I could migrate from a Metal to a Large and save some $ | [16:00] | |
up_the_irons | RAM for RAM, HD for HD, it's a little cheaper than Metal | [16:00] | |
brycec | And the backend data redundancy is the icing on the cake | [16:00] |
up_the_irons | right exactly
use cases i would _not_ recommend: very high bandwidth, really high disk IOPS (having your own dedicated disks, all to yourself, is better in this case) | [16:01] | |
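(If you're unsure whether your workload counts as "really high disk IOPS", measuring beats guessing; a non-destructive sketch assuming fio is installed and the volume is /dev/vdb:)

```sh
# Read-only random-read benchmark; --readonly guarantees no writes are issued.
fio --name=randread --filename=/dev/vdb --readonly --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based
```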
brycec | Both good points. | [16:03] | |
mercutio | not having to deal with java is a huge + too | [16:03] | |
brycec | lol | [16:03] | |
mercutio | don't underestimate it :) | [16:03] | |
up_the_irons | oh yes, cannot emphasize that enough ;) | [16:04] | |
brycec | On the other hand I'll miss ipmitool and the feeling of reliability that comes with running on bare-metal. | [16:04] | |
mercutio | maybe you should just do both :) | [16:04] | |
brycec | Why mercutio it's as if you're benefitting from the upsell ;p | [16:05] | |
mercutio | hah | [16:05] | |
brycec | The only thing I have to figure out is how much storage I need / how much I'm currently using and can be shaved-off.
(I'm coming from RAID-1 1TB, so 500GB is a bit of a step-down, but how much am I _really_ using?) | [16:05] | |
up_the_irons | brycec: you can get a 1TB volume, it's on the order form under "Extra Disk" | [16:06] | |
brycec | I know :) | [16:06] | |
mercutio | bryce is trying to save money i think | [16:06] | |
up_the_irons | and hell, just saying, if you want a 5TB or 10TB volume, just say the word ;)
ah ok | [16:07] | |
brycec | lol it's all just $
Also, since I intend to run ZFS on it, shrinking is not an option (and growing is... problematic) | [16:07] | |
mercutio | zfs grows fine i think | [16:07] | |
brycec | So I've gotta get it "right". That said, identical Thunder specs to my current Metal are just $15/mo less | [16:08] | |
mercutio | well i know :) | [16:08] | |
up_the_irons | think if you need the SSD disk then, b/c (while it's not listed yet), i was thinking of allowing people to "opt out" of either the SSD disk or the SATA disk and get a discount. i mean, why pay if you're not gonna use it. | [16:08] | |
mercutio | but yeah shrinking ain't so good | [16:08] | |
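(Growing is less problematic than it sounds once the backing volume has been enlarged; a sketch with assumed pool and device names:)

```sh
# Let ZFS expand onto the space added to the underlying volume.
zpool set autoexpand=on tank
zpool online -e tank /dev/vdc   # re-read the now-larger device
zpool list tank                 # SIZE should reflect the extra space
```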
brycec | mercutio: It has to be done in a matching manner
up_the_irons: I will be quite happy to have /some/ SSD (for ZFS L2ARC/cache), but I appreciate the consideration and I recommend extending it to others. | [16:08] |
up_the_irons | OK :) | [16:09] | |
brycec | Now if I could downgrade the SSD to a lesser amount of storage for a "credit" towards SATA... but I think it's just getting "complicated" at that point and I don't want to make things too complicated for your side | [16:10] | |
mercutio | why not stick the OS on one partition of ssd
and zfs on the other
actually i'd just create two zfs pools
i kind of wish zfs could auto balance between two zfs pools of different speeds, or have disks with different speeds that are balanced | [16:11] |
brycec | mercutio: And I would stick the OS on the SSD, sure. But I don't need 200GB of SSD for OS+L2ARC :p | [16:12] | |
mercutio | database? | [16:12] | |
brycec | Not currently running anything that "heavy" | [16:12] | |
mercutio | i don't know what you're doing | [16:12] | |
brycec | (The database server I do have running can keep everything in RAM anyways) | [16:13] | |
mercutio | but my current thinking is that ssd is good for generic stuff
and hard-disk is good for archive/backup | [16:13] | |
brycec | What aren't I doing? :P (Serving porn, among many things. But nobody really cares.)
I'm tentatively thinking I'd put my running VM images (yes, I'll be running Proxmox) on the SSD and backups on the SATA. And this is where figuring out where exactly I'm using space comes into play.
(Answer: about 70% of my current usage is in ZFS snapshots anyways) | [16:13] |
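(That snapshot-vs-live-data breakdown comes straight out of ZFS accounting; "tank" is an assumed pool name:)

```sh
# How much of the pool is snapshots versus current data, per dataset.
zfs list -o name,used,usedbydataset,usedbysnapshots -r tank
zfs list -t snapshot -o name,used -s used -r tank   # largest snapshots listed last
```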
up_the_irons | We'd be happy to trade SSD for SATA "credit". I mean, why not be flexible now that we can. that's part of the excitement of all this new tech :) | [16:14] | |
brycec | Thanks up_the_irons :)
(I think I might be most tempted by the flexibility of increasing RAM "on the fly") | [16:15] | |
up_the_irons | and that's another "command" ;) | [16:16] | |
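(On a KVM/libvirt host, that "command" is plausibly virsh with a memory balloon device; the domain name and sizes below are assumptions:)

```sh
# Bump a running guest's RAM without a reboot (up to its configured maximum).
virsh setmaxmem guest1 16G --config   # raise the ceiling for the next boot
virsh setmem guest1 8G --live         # balloon the running guest right now
```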
brycec | To a certain extent, storage could always be added in various ways (i.e. Another VPS exporting a partition over iSCSI) but RAM was really, truly a "visit the datacenter" task. And once you hit the max supported for your machine, that was that. | [16:17] | |
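(One concrete version of those "various ways": exporting a spare volume from another VPS with targetcli; every name and the IQN below are illustrative assumptions:)

```sh
# Publish /dev/vdd as an iSCSI LUN that another machine can attach.
targetcli /backstores/block create name=extra dev=/dev/vdd
targetcli /iscsi create iqn.2017-03.net.example:extra
targetcli /iscsi/iqn.2017-03.net.example:extra/tpg1/luns create /backstores/block/extra
```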
up_the_irons | yeah that's the truth | [16:18] | |
brycec | Out of curiosity, what kind of hardware makes up the Ceph cluster? Are you "recycling" old kvr host machines? Or is this fancy new stuff? | [16:19] | |
up_the_irons | The SSD side is HP ProLiant 25-bay units
The SATA side has some recycled old kvr hosts, but I'm stopping that because a 12-bay DL180 G6/7 is looking like a better value | [16:20] |
brycec | Ooooh. Ahhh. What about the hosts for Thunder? | [16:21] | |
up_the_irons | as long as I can get the fucking LSI 9211's to play nice with the G6. If you follow our Instagram, you'll see my frustration ;)
Thunder hosts are just our regular E3 and E5's, the same ones we use for Metal | [16:21] | |
brycec | Okay cool. | [16:22] | |
up_the_irons | In fact, they are all old Metal hosts | [16:22] | |
brycec | Every time a customer migrates from Metal to Thunder, you get another host machine to use, eh? | [16:23] | |
up_the_irons | why yes :) | [16:23] | |
brycec | And here I was slightly worried I was "sticking you with" the old machine | [16:23] | |
up_the_irons | people still order the Metal's, so it's OK. I'll sell them either way. | [16:24] | |
brycec | Thanks for the Q&A up_the_irons. I'm definitely sold. Just a matter of figuring out _what_ I need, and a migration plan.
(And thanks to mercutio too, of course) | [16:30] | |
up_the_irons | no problem :) | [16:30] | |
brycec | (I'm even debating going from Metal to 3*Small Thunders so I can break one host and not affect the other two.) | [16:30] | |
mercutio | sounds like a storm | [16:31] | |
brycec | (*Starter, not Small. My bad) lol | [16:31] | |
mercutio | yeah that's the kind of thing that something like this makes easier to do | [16:31] | |
brycec | Indeed. Harder to manage though, and less efficient "savings" but it might be worth it. Or a pair of "mediums" for that matter. | [16:33] | |
mercutio | maybe two mediums and a starter? | [16:33] | |
brycec | I only have a handful of VMs I'm hosting anyways, it's not like it's the end of the world if they're unreachable. But I'm intrigued at the "modularity" of it all. | [16:34] | |
..................... (idle for 1h42mn) | |||
nathani | I am curious: if metal boxes are being repurposed as Thunder boxes, what's the speed of the storage network? Is it 1 gig ethernet, 10 gig ethernet, or some kind of InfiniBand? | [18:16] |
........... (idle for 50mn) | |||
up_the_irons | nathani: you asked me this before ;)
40 Gbps Infiniband | [19:06] | |
............. (idle for 1h1mn) | |||
nathani | up_the_irons: I didn't realize the metal dedicated boxes had infiniband as well, or is this something you are installing to convert them into thunder boxes? | [20:07] | |
mercutio | infiniband is awesome :)
it should be everywhere
a 1 gigabit network for file storage would give pretty mediocre performance
on top of that, there'd be a lot less performance stability, as there would be issues like congestion etc. from other users
the cool thing about having a huge amount of bandwidth available is that it doesn't really bottleneck (of course the bottleneck can move elsewhere) | [20:14] |
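(Rough numbers behind that point, assuming the 40 Gbps figure is QDR InfiniBand with 8b/10b encoding and ignoring protocol overhead:)

```sh
# 1 GbE:   1 Gbit/s / 8       ~ 125 MB/s  -- less than a single SATA SSD
# 40G IB:  40 Gbit/s signaling, 8b/10b -> 32 Gbit/s ~ 4 GB/s of payload
```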
............. (idle for 1h2mn) | |||
*** | awyeah has quit IRC (Quit: ZNC - http://znc.in) | [21:18] |
awyeah has joined #arpnetworks | [21:18] |
................. (idle for 1h23mn) | |||
karstensrage has quit IRC (Ping timeout: 246 seconds) | [22:43] |
Guest39686 has joined #arpnetworks | [22:43] |
........ (idle for 35mn) | |||
Guest39686 has quit IRC (Ping timeout: 246 seconds) | [23:21] |
karstensrage has joined #arpnetworks | [23:21] |
karstensrage is now known as Guest2979 | [23:21] |