***: erratic has joined #arpnetworks
erratic has quit IRC (Excess Flood)
erratic has joined #arpnetworks
perlgod has left "WeeChat 1.9.1"
erratic has quit IRC (Excess Flood)
erratic has joined #arpnetworks
ziyourenxiang has joined #arpnetworks
mloza has joined #arpnetworks
erratic has quit IRC (Excess Flood)
erratic has joined #arpnetworks
ziyourenxiang has quit IRC (Ping timeout: 248 seconds)
erratic has quit IRC (Excess Flood)
erratic has joined #arpnetworks
erratic has quit IRC (Excess Flood)
qbit: so - for the dedicated boxes
i am hitting an issue with clocks on openbsd
brycec: Clocks as in... CPU? Or RTC? qbit
(Not that I have answers, I just help get details/triage)
qbit: RTC
***: erratic has joined #arpnetworks
brycec: What's it doing? Drifting? Lagging? And to confirm, OpenBSD is the host OS on the dedi box?
qbit: basically this: https://marc.info/?l=openbsd-bugs&m=151430928212450&w=2
the preemption_timer
i am wondering if it would be possible to get kvm-intel.preemption_timer=0 on whatever host has my dedi stuff
brycec: What exactly is that? Some OpenBSD config() thing? A Linux string?
If you've got a dedi, you've got baremetal access, you can do whatever...
qbit: kernel boot option
brycec: Or have you been talking about a Thunder instance?
qbit: thunder
i call them dedi because : https://arpnetworks.com/dedicated
"dedicated"
brycec: "dedicated resources" but yeah, it leads to confusion IMO
So to clarify, you're asking if you could arrange for the Thunder host machine could be rebooted with a kernel parameter?
qbit: guess i am asking about the possibility of doing that
i understand that likely there are more clients.. so if it isn't an option - i will have to figure something else out
brycec: Seems like a tall ask :p but that's my opinion. And like I said, I'm just trying to distill the problem/request to help everyone out. :)
mercutio: does this effect normal vps instances too?
it seems like quite a major openbsd bug
brycec: *affect :D
mercutio: yes :)
brycec: qbit: What was your `sysctl kern.timecounter` anyways?
qbit: seems to be anything in kvm
brycec: acpitimer0
mercutio: i think the only major change to openbsd in the recent history has been the addition of virtio random device
brycec: So not exactly the same as the mailing list guy's
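[A sketch of the timecounter check brycec asked about. The output shown is illustrative of a typical OpenBSD KVM guest, not captured from the machines discussed here.]

```shell
# Inspect which clock source the OpenBSD guest is using, and what's available:
sysctl kern.timecounter.hardware
# kern.timecounter.hardware=acpitimer0
sysctl kern.timecounter.choice
# kern.timecounter.choice=i8254(0) acpitimer0(1000) acpihpet0(1000) dummy(-1000000)

# A different source can be selected at runtime, in case the drift is
# specific to one timecounter:
# sysctl kern.timecounter.hardware=acpihpet0
```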
qbit: mercutio: negative - i am running -current
this is a grip of changes from 6.2
mercutio: was it fine prior to -current
qbit: seems to have been fine prior to the meltdown stuff
mercutio: sigh
yeah so it probably will start happening on all virtual machine instances unless they fix it
starting with the next release
"FWIW, there are reports that this bug is absent from qemu-2.11.0."
hmm
i thought it was a kvm bug rather than qemu bug
qbit: https://marc.info/?l=openbsd-bugs&m=151439799628499&w=2
oh, you are probably already past that :P
mercutio: well it's mostly about kvm_intel.preemption_timer being disabled apparently helping
that made me think it was kvm issue
uh, i don't think preemption_timer exists in the linux version being used
qbit: heh
it's a boot option (i put mine in grub)
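[The boot-option change qbit describes, sketched for a Debian/Ubuntu-style host with GRUB; file paths and the update command vary by distro.]

```shell
# Add the parameter to the kernel command line in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet kvm-intel.preemption_timer=0"
grep GRUB_CMDLINE_LINUX /etc/default/grub

# Regenerate the grub config and reboot for it to take effect:
sudo update-grub && sudo reboot

# After reboot, on kernels new enough to expose the parameter, verify with:
cat /sys/module/kvm_intel/parameters/preemption_timer   # expect: N
```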
mercutio: it's not in /sys/module
but it's in /sys/module on newer kernel
qbit: dang
mercutio: it's linux 4.4
qbit: will /sys/module always reflect available modules? or maybe loaded?
ah
mercutio: not always
qbit: Not Always - Linux
mercutio: but if it's not supported in there you can definitely not change it without reboot
qbit: dang
mercutio: oh you can't change it while booted anyway
qbit: double dang
BryceBot: That's what she said!!
qbit: if you say so BryceBot
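[The /sys/module check discussed above, as a sketch. A parameter only appears there once kvm_intel is loaded *and* the kernel is new enough to expose it, so its absence doesn't by itself prove the option is unsupported.]

```shell
# Check whether this kernel exposes the preemption_timer parameter:
p=/sys/module/kvm_intel/parameters/preemption_timer
if [ -r "$p" ]; then
    cat "$p"    # Y or N; read-only at runtime, so changing it needs a reboot
else
    echo "preemption_timer not exposed by this kernel/module"
fi
```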
mercutio: https://www.spinics.net/lists/kvm/msg161757.html
this suggests one way that may help a specific instance
qbit: this is quite easy to reproduce right?
qbit: step 1) install openbsd
mercutio: snapshot?
qbit: ya
https://ftp3.usa.openbsd.org/pub/OpenBSD/snapshots/amd64/
that's all - no more steps :D
clock will drift even with ntp going
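[A sketch of how to watch for the drift from inside an OpenBSD guest: ntpd logs its corrections to /var/log/daemon, and ntpctl reports the current offset.]

```shell
# Current ntpd status, including the offset to its peers:
ntpctl -s status

# Recent corrections; steadily growing adjustments (whole seconds rather
# than milliseconds) are the symptom being described here:
grep 'adjusting local clock' /var/log/daemon | tail -5
```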
mercutio: hmm install62.fs is preinstalled? :)
i can't remember seeing that before
brycec: What do you mean "is preinstalled"? It's a dd'able version of the ISO, but still just the install media (bsd.rd)
qbit: it's bsd.rd
mercutio: oh
damn
qbit: with sets
brycec: qbit: Have you tested to see whether it can be reproduced from bsd.rd alone?
Or does it take the full bsd kernel config?
(I know that bsd.rd omits a lot of ACPI stuff to keep its size down, among other reasons)
qbit: i don't know if bsd.rd is enough to make it happen
yeah, i doubt it would work
brycec: Also, I would've pointed mercutio to http://mirrors.arpnetworks.com/OpenBSD/snapshots/ :P
qbit: s/work/be a good test case/
BryceBot: <brycec> Also, I would've pointed mercutio to http://mirrors.arpnetbe a good test cases.com/OpenBSD/snapshots/ :P
qbit: :D
mercutio: BryceBot: that's where i'm pulling it from :)
oh oops :)
brycec: lol qbit
qbit: Feb 20 13:03:47 spark ntpd[5226]: adjusting local clock by 18.013269s
Feb 20 13:08:07 spark ntpd[5226]: adjusting local clock by 22.688790s
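[Those two log lines put a number on the drift: the needed correction grew by about 4.68 s over the 260 s between 13:03:47 and 13:08:07, i.e. the guest clock loses roughly 18 ms per second of wall time. A quick check:]

```shell
# Drift rate implied by the two ntpd lines above:
# (22.688790 - 18.013269) s of extra correction over 260 s of wall time.
awk 'BEGIN { printf "%.1f ms lost per wall-clock second\n", (22.688790 - 18.013269) / 260 * 1000 }'
# prints: 18.0 ms lost per wall-clock second
```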
mercutio: damn
up_the_irons: I agree with mercutio , it seems like quite a major openbsd bug
brycec: up_the_irons: For reference, what version of Linux/kvm/qemu are Thunder machines running?
qbit: And just to be clear, *you* have only seen this behaviour with recent snapshots? Older snapshots were fine, as were older releases?
up_the_irons: 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
ii qemu-kvm 1:2.5+dfsg-5ubuntu10.16 amd64 QEMU Full virtualization
qbit: brycec: if it was in previous snapshots - it wasn't enough of a delay for me to notice
ha
brycec: I'm going to fire up a -current machine on a Proxmox machine, 4.4.62-1-pve + qemu 2.9.1-6~pve4 and see what happens...
qbit: i am not very observant
# date
Tue Feb 20 13:21:04 MST 2018
#
mercutio: mine just finished installing
qbit: that's 6.2
mercutio: i like how openbsd installs quite quick
qbit: Feb 20 13:19:22 db ntpd[2811]: adjusting local clock by 207.116095s
brycec: qbit: So you're saying 6.2 is also affected, cool (I have 6.2 VMs already in Proxmox :p)
qbit: ya
mercutio: hang on, my vm is using acpihpet0
will that change things?
***: erratic has quit IRC (Excess Flood)
brycec: For the most part, my VMs stay just fine, but sometimes there are wild swings in drift, I'm guessing caused by load spikes in other VMs on the host. http://ix.io/N8h
By comparison, a 6.0 VM running on the same host has almost no time swing
mercutio: brycec: is it using acpihpet0?
my time still seems right with -current heh
maybe i need to trigger it with load?
brycec: Yes, both my 6.0 and 6.2 VM on the same Proxmox host are using acpihpet0. Only the 6.2 VM sees random drift. (Seeing it on Proxmox means it's not just an ARP issue :) And it's what I have to play with readily available.)
mercutio: (i'm testing on my desktop first)
my desktop has linux 4.15, and some recent qemu
qbit: yeah - this has been confirmed on vultr too
mercutio: i'm kind of curious if they've fixed it in linux or something
qemu 2.11.1
brycec: 6.0 http://ix.io/N8s 6.2 http://ix.io/N8t Same exact kern.timecounter values.
mercutio: ahh cool
that's curious, you got a lot more adjustments since the 13th
brycec: "Something happened" on the 13th it seems.
mercutio: to both of them
brycec: And re-evaluating my output, I guess 6.0 also saw some lagging after all.
mercutio: this doesn't seem extreme though
what qbit was talking about seemed more extreme
brycec: I disagree, 10s+ adjustments are extreme :p
mercutio: i mean it's not good
but wasn't there like minutes of drift
like a lot bigger difference
brycec: But these 2 particular VMs of mine are real small stuff, not doing any major load (they're effectively bastion hosts). I suspect with heavier load, it exacerbates the issue.
mercutio: yeah it may
qbit: it's def worse on machines with high load
mercutio: i need to compile something :)
brycec: So I guess I'm back to: a) Doesn't seem to matter which OpenBSD release, even 6.0 is affected to a degree, albeit perhaps lesser. b) Even newer kernel+kvm (than ARP, but not "newest") are affected.
anisfarhana: You guys are crazy.
qbit: likely the meltdown stuff in current snaps is gonna make it worse too
***: erratic has joined #arpnetworks
mercutio: i just realised something: i'm not testing with smp
is this only affecting smp hosts, or in general?
s/hosts/guests/
BryceBot: <mercutio> is this only affecting smp guests, or in general?
qbit: mine are all single cpu
mercutio: ah ok
brycec: Ditto for me (I'm pretty sure)
mercutio: i thought it was on thunder
qbit: it is - but mine is proxmox -> openbsd vms
mercutio: ah ok
well i did a kernel compile, and it failed to compile...
maybe -current has broken compiling 6.2
time is still right
so yeh i think either newer kernel or newer kvm has fixed it
***: erratic has quit IRC (Excess Flood)
qbit: huh
well that's cool
it seems to sync back up once load has gone down
***: erratic has joined #arpnetworks
erratic has quit IRC (Excess Flood)
erratic has joined #arpnetworks
mercutio: qbit: what kernel is proxmox running under?
i'm thinking that the underlying thunder thing may make less difference than the proxmox layer..
qbit: mercutio: 4.4.98-6-pve
mercutio: ahh so similar to ubuntu kernel
oh maybe proxmox doesn't have a recent kernel available to try
The current stable 4.x release uses latest Ubuntu based kernel, which will be regularly updated. The first stable 4.0 release is based on 4.2 Linux kernel.
it's actually the same kernel even :)
qbit: heh
***: erratic has quit IRC (Ping timeout: 260 seconds)
erratic has joined #arpnetworks
fIorz_ has joined #arpnetworks
fIorz has quit IRC (Remote host closed the connection)