***: jcv has joined #arpnetworks
vom has joined #arpnetworks
vom: hello all - anyone seeing disk latency on vps hosts ?
i had a kernel panic in the wee hours from the disk "falling out"
and now im back up - but IO is abysmal
this is using sysbench
Read 32.812Mb Written 21.875Mb Total transferred 54.688Mb (186.26Kb/sec)
***: incin has joined #arpnetworks
johnny-o: vom: out of curiosity, is that under openbsd?
vom: johnny-o: its not - its ubuntu server 16.04 - kernel 4.4.0
i had the kernel panic a few weeks ago - but not the slow down
this morning i got both
the server is very lightly loaded - ram is never an issue
johnny-o: is that rand or seq r/w? just curious, going to try the same thing on my openbsd host
vom: here's my cmdline ive been playing with this morning
sysbench --test=fileio --file-total-size=2G prepare
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --max-time=600 --max-requests=0 run
sysbench --test=fileio --file-total-size=2G cleanup
so to answer your question - looks like random read/write
johnny-o: i began having disk i/o performance issues after migrating to a new host, so i thought maybe it was unique to openbsd
vom: ive never ran sysbench before - so im running it on some local bare metal boxes too just to get some more data points
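For anyone repeating the comparison on other boxes, the three invocations above can be wrapped in a small script so every host runs an identical prepare/run/cleanup pass. This is only a sketch reusing vom's parameters as-is; the per-host log filename is made up.
  #!/bin/sh
  # same sysbench fileio parameters as above; keep a per-host log for comparison
  log="sysbench-$(hostname)-$(date +%Y%m%d-%H%M).log"
  sysbench --test=fileio --file-total-size=2G prepare
  sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw \
      --max-time=600 --max-requests=0 run | tee "$log"
  sysbench --test=fileio --file-total-size=2G cleanup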
my vm was migrated a few months ago i think - and (huge grain of salt here) - i never had disk issues like this previous
johnny-o: i didn't report anything official, but mentioned something briefly here and mercutio mentioned a workaround
hmm, interesting
i guess i don't know enough about the underlying host configuration changes, but i also know i had not a single issue on the old host
***: ziyourenxiang has quit IRC (Ping timeout: 248 seconds)
brycec: FWIW I frequently see high load averages overnight in the wee hours. My VPS itself isn't doing anything much at the time, just seems that disk IO slows down. (my backups run ~12 hours later)
wee hours = 01:40 - 01:50 ARP standard time (PDT, GMT-7)
(It rebounds though. Don't know what your ongoing issue may be, sorry)
johnny-o: yeah, mine is much more intermittent. i'm compiling as much information as i can to speak to it in a more useful manner, but i can't seem to force reproducibility and repeatability
brycec: For comparison, results from my busy VPS' SSD-backed Ceph pool:
Read 827.95Mb Written 551.97Mb Total transferred 1.3476Gb (2.2975Mb/sec)
147.04 Requests/sec executed
For the spinning-rust volume (less busy, at least by my VPS):
Read 164.06Mb Written 109.38Mb Total transferred 273.44Mb (466.01Kb/sec)
29.13 Requests/sec executed
I wouldn't say I notice any significant latency though. Not currently, as of those sysbench runs, anyways.
toddf: consider a lot of vm's with daily cronjobs roughly the same time of day ...
***: dferris__ has joined #arpnetworks
dferris has quit IRC (*.net *.split)
vom: my performance seems to have slowly gotten better as the day has gone on...
Read 289.69Mb Written 193.12Mb Total transferred 482.81Mb (1.5972Mb/sec)
thats about 10x what it was running first thing this morning
i don't mean this as an accusation at all - but could there be resource (storage) contention on the host ?
and if a good chunk of VMs are running various crons at roughly the same time, its running too hot and timeouts and hangs are manifesting at the vm level ?
johnny-o: that's closer to what i've seen during a couple of runs at random times throughout the day using your sysbench params
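If the contention theory holds, one guest-side mitigation is to put a random delay in front of heavy nightly jobs so guests on the same host don't all hit shared storage in the same minute (systemd timers offer RandomizedDelaySec= for the same purpose). A sketch; the backup script path is hypothetical:
  #!/bin/sh
  # hypothetical wrapper for a nightly cron job: sleep a random 0-1800 seconds
  # before starting, so co-resident guests spread their disk load out
  sleep $(( $(od -An -N2 -tu2 /dev/urandom) % 1800 ))
  exec /usr/local/bin/nightly-backup.sh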
i have no idea if i can systematically detect what i'm seeing, as it may be abstracted away
seems to manifest itself in annoying interactive ways, like deleting messages in mutt and having to wait 5-7 seconds
***: Tsesarevich_ has joined #arpnetworks
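Since the slowdowns are intermittent, one way to catch them after the fact is a lightweight probe that does a small synced write every minute and timestamps how long it took; a spike in the log at the same moment mutt stalls points at the disk rather than the application. A sketch with made-up paths (a dedicated tool like ioping does the same job more precisely, if it's installed):
  #!/bin/sh
  # hypothetical write-latency probe: one small fdatasync'd write per minute,
  # timestamped, so intermittent stalls show up in the log
  while true; do
      printf '%s ' "$(date '+%Y-%m-%d %H:%M:%S')"
      dd if=/dev/zero of=/var/tmp/io-probe bs=4k count=256 conv=fdatasync 2>&1 | tail -n 1
      sleep 60
  done >> /var/tmp/io-probe.log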
Tsesarevich has quit IRC (Ping timeout: 265 seconds)
qbit[m] has quit IRC (Ping timeout: 245 seconds)
qbit[m] has joined #arpnetworks
mercutio has joined #arpnetworks
ChanServ sets mode: +o mercutio