its ok. just download and execute it
mnathani: Sorry to say I have no idea.
up_the_irons: and other metalheads, some free (and discounted) Mastodon https://play.google.com/store/music/collection/promotion_200041d_vid_mastodon
yay mastodon
ughhhhhh
Can you be more specific? what did I miss? :)
How long were you gone? (I have quits hidden... just know that it's been awhile since you last spoke)
the logs should know
or not
well, the logs for today are quite sparse anyways
The logs would, but then I'd have to go to the effort of digging through logs
And I'm already busy with today's stumper
OpenVZ container with v4 and v6 addresses (venet). Container can ping6 wherever it wants, it can "curl -6" wherever it wants, in short ipv6 seems to work fine.
Client machines (same subnet) can ping6 the container, but can't make http or svn requests.
tcpdump on the client shows only the request to the container, but the container and its host show two-way traffic [attempts]
But I think I found the culprit in tcpdump's -e -- the MAC on the return traffic is wrong
It's trying to return the request to 00:00:5e:00:01:14 which is a VRRP special MAC
Soooo my NDP table somehow got b0rked
wtf happens with openvz
yeah I know v6 in vz is a bit buggy
dealt with a similar issues yesterday at $work
s/a//
delt with similr issues yesterdy t $work
well that was bad regex
what a coincidence
Was there a solution? Or just "well that's openvz"
brycec: using solusvm at work, just removed and readded the address
i think it was an issue with openvz + provider's switches
hard to debug
Gives me something at least, thanks
hi brycebot
Hello to you too, hazardous
Update: Both containers with v6 addresses on that host have the issue. AND a quick little python web server running on the host itself isn't accessible remotely (but is via ::1.
and yes it's listening on :: and there are no iptables active)
So, it's just the host being a dick
And since venet is routed, that dickishness is passed along
VICTORY IS MINE
Now to figure out why exactly this worked "ip -6 route add 2001:400::/24 dev vmbr0"
:o
I mean part of it is obvious
(I found that route on 1 of my 4 Proxmox hosts. I'm not sure how it got there.)
And I'm not sure why that /24 specifically
Probably because that's the upstream /24 of my /64
Still no idea where that came from, and why it's only on 1/4 hosts
[sorry for the flood, #arpnetworks]
^^ Answer: A typo in /etc/network/interfaces, "netmask 24" on the inet6 definition
It's so nice to see I'm not alone in this minor nightmare http://blog.endpoint.com/2013/07/proxmox-and-fun-maze-of-ipv6.html
In ARP news, ipv6 Just Works here :D
Welp, my woes are resolved. Turned out to be routing issues all along.
why are you using openvz bryce
because I'm using Proxmox, and VZ is much, much lighter than a full-blown KVM instance
oh yip
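[Editor's note] For reference, the kind of /etc/network/interfaces stanza (Debian/Proxmox ifupdown) that would produce the stray on-link route found above — the address is a made-up placeholder; only the "netmask 24" typo and the bridge name vmbr0 come from the log:

```
iface vmbr0 inet6 static
    address 2001:400:aaaa::2   # hypothetical address inside the /64
    netmask 24                 # typo from the log: should be 64
```

With a /24 prefix length, ifupdown installs an on-link route equivalent to "ip -6 route add 2001:400::/24 dev vmbr0" — which is presumably why that route existed on exactly the one host with the typo, and why adding it by hand fixed the others.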
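[Editor's note] The MAC that showed up in the tcpdump trace above, 00:00:5e:00:01:14, comes from the IANA-reserved IPv4 VRRP virtual-router range 00:00:5e:00:01:{VRID} (RFC 5798), which is why it was recognizable as "a VRRP special MAC". A minimal sketch of decoding the VRID from it — the MAC is the one from the log; the decoding is just hex arithmetic:

```shell
# IPv4 VRRP virtual-router MACs have the form 00:00:5e:00:01:{VRID},
# where the last octet (in hex) is the virtual router ID.
mac="00:00:5e:00:01:14"
vrid=$((16#${mac##*:}))   # strip everything up to the last ':', parse as hex
echo "VRID: $vrid"        # → VRID: 20
```

So a neighbor (NDP) entry pointing at this MAC suggests the reply traffic was being handed to a VRRP gateway (VRID 20 here) instead of the client's real MAC.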