[01:15] *** up_the_irons has quit IRC (Ping timeout: 268 seconds)
[02:01] *** kleszcz has joined #arpnetworks
[09:34] *** ziyourenxiang__ has quit IRC (Ping timeout: 246 seconds)
[14:23] *** tmarble has joined #arpnetworks
[14:23] I can't get into the portal... can anyone help?
[14:59] *** up_the_irons has joined #arpnetworks
[14:59] *** ChanServ sets mode: +o up_the_irons
[14:59] wonder why I was booted...
[15:00] up_the_irons: can you help me get back into the portal?
[15:01] tmarble: just sent you a response to your ticket
[15:01] tmarble: in fact, you should have a password reset email in your Inbox
[15:01] I generated one just now
[15:02] yep, trying it
[15:03] yea, i'm in!
[15:11] up_the_irons: thx! (wondering how you can do the live VM migrations??)
[15:19] live migration means copying memory contents over the network
[15:19] so basically it just looks at the state of your vm, and recreates that on another host
[15:20] the storage itself can be accessed from multiple locations, so it doesn't need to be replicated
[15:20] that's crazy! is it with lxc, virtbox, xen or something else?
[15:20] kvm/qemu
[15:20] nice!
[15:21] backend is ceph
[15:21] i.e. you kind of run "swap" to ceph and let it do the "network distribution" for you?
[15:21] .. but page out everything
[15:21] ceph stores the whole block device
[15:21] and distributes it across multiple servers
[15:21] oh.. the disk, sure
[15:21] so you can lose one server and keep going, etc.
[15:21] how about memory?
[15:22] but it also means that you can join the network between two locations
[15:22] and access the same storage from another location
[15:22] and storage can be moved from one location to another with no downtime too
[15:23] for memory, it basically just reads in all of the memory and sends it to the other host
[15:23] so literally there's only a couple of seconds (risking transactions in flight) to pivot the "master"?
[15:23] that can take a few seconds, so after it's done that it copies any new changes
[15:24] is that something qemu supports out-of-the-box?
[15:24] well it's like swap, it'll trap all new memory accesses
[15:24] so it knows to send them
[15:24] umm, sort of
[15:24] it's out of the box.. but ceph, etc. is reasonably complicated to set up
[15:25] understood
[15:25] you need at least 3 or 4 storage hosts to run ceph storage
[15:25] but it scales out nicely
[15:26] it's the free/open source equivalent of S3 :)
[15:26] but it means that it isn't really used with very small setups
[15:26] ceph has a few things; an object store like s3 is one of them
[15:26] it also has rbd, which we're using, and a file system
[15:27] rbd is basically optimised for block device storage
[15:27] and supports useful things that cater to that, like resizing the block device
[15:27] cool!
[16:00] *** ziyourenxiang__ has joined #arpnetworks
[17:18] What's the network capacity / link between the 2 cages? And is it routed via the Meet Me Room, or direct cage to cage?
[20:02] *** hive-mind has quit IRC (Ping timeout: 268 seconds)
[20:07] *** hive-mind has joined #arpnetworks
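
Editor's note: the 15:19-15:24 discussion describes pre-copy live migration (copy all guest memory while the VM runs, re-send dirtied pages, then pause for a few seconds to pivot). The channel doesn't say how these migrations are actually driven, so the following is only a minimal sketch assuming libvirt on top of kvm/qemu with shared Ceph storage; the host and guest names are hypothetical.

# Live-migration sketch using the libvirt Python bindings.
# Assumes the guest's disk is on shared storage (e.g. a Ceph RBD image),
# so only memory state needs to be transferred.
import libvirt

# Connect to the source hypervisor and look up the running guest.
src = libvirt.open('qemu:///system')
dom = src.lookupByName('guest-vm')  # hypothetical guest name

# Connect to the destination hypervisor (hypothetical hostname).
dst = libvirt.open('qemu+ssh://kvm-host2.example.com/system')

# VIR_MIGRATE_LIVE performs the pre-copy described in the chat: memory is
# streamed while the guest keeps running, dirtied pages are re-sent, and the
# guest is paused only for the final switchover.
flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PERSIST_DEST
         | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()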
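
Editor's note: for the RBD side of the conversation (whole block devices stored in Ceph, distributed across servers, and resizable without downtime), here is a small sketch using Ceph's official rados/rbd Python bindings. The pool and image names are made up, and it assumes a reachable cluster with /etc/ceph/ceph.conf and a valid keyring on the client.

# Create and resize an RBD image with Ceph's Python bindings.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # pool name (hypothetical)

try:
    # Create a 10 GiB block device image; Ceph stripes it across OSDs on
    # multiple servers, so losing one server doesn't lose the device.
    rbd.RBD().create(ioctx, 'guest-vm-disk', 10 * 1024**3)

    # Grow it to 20 GiB while in use; the block device resize itself needs
    # no downtime (the filesystem inside still has to be grown separately).
    with rbd.Image(ioctx, 'guest-vm-disk') as image:
        image.resize(20 * 1024**3)
finally:
    ioctx.close()
    cluster.shutdown()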