Rob Landley (landley) wrote,

Setting up a debian container to test NFSv3.

So, last time I got NFS working at the KVM level with mount.nfs but not with busybox, meaning it's not going to work in the busybox-based container for reasons that have nothing to do with the container infrastructure. I could use that container to test NFSv2, but that's obsolete.

I could fix busybox (I already have vague plans to write a p9 server implementation for busybox), but I really don't want to write more NFS support code than I absolutely have to, so let's see about a debian container.

Copy the busybox.conf to debian.conf, change its utsname to "debian", and:

aptitude install debootstrap
lxc-create -f debian.conf -t debian -n debian
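For reference, the resulting debian.conf looks something like this (a sketch of the lxc-0.x config key format; the bridge name and the eth1 interface name are assumptions, adjust to match your host setup):

```
lxc.utsname = debian
lxc.network.type = veth
lxc.network.link = br0
lxc.network.name = eth1
lxc.network.flags = up
```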

And it doesn't work, because the wireless was flaky enough to fail to download a couple of packages (DNS timeouts, I think), and debootstrap has no error checking. Wheee. (Dear T-Mobile: Jim Gettys just did an excellent talk on bufferbloat. Look into it, please.)

Try again from a context with more reliable internet... Ok, the base OS installed. Fire it up and connect with lxc-console so we get a tty that isn't in "cooked" mode... and it hasn't got aptitude, vim, or an NFS client, so apt-get install all of that. Plus it's not setting up the virtual network by default, so fix that:

apt-get install aptitude
aptitude install vim nfs-common inetutils-ping
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback

auto eth1
iface eth1 inet static
  address 192.168.254.2
  netmask 255.255.255.0
  gateway 192.168.254.1
EOF
ifup eth1

Ok, at the KVM level this thing lives at /var/lib/lxc/debian/rootfs, and if we chroot into that we can mount the NFS export. But from inside the container (via lxc-console) the mount times out.

Switch on the RPC debug infrastructure to spam stuff into dmesg (echo 1 > /proc/sys/sunrpc/rpc_debug), mount the filesystem, ls -l /mnt, and umount it. (There's an nfs_debug in the same directory, but it doesn't seem to do anything.)
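For what it's worth, rpc_debug is a bitmask rather than a boolean: echoing 1 sets just the transport-layer flag, which is why the spam below is all xprt_* messages. The flag values here are my reading of the kernel's include/linux/sunrpc/debug.h, so treat this as a sketch:

```shell
# rpc_debug bit values (as I read include/linux/sunrpc/debug.h):
RPCDBG_XPRT=$((0x0001))   # transport: the xprt_* messages -- what "echo 1" turns on
RPCDBG_CALL=$((0x0002))   # rpc call scheduling
RPCDBG_ALL=$((0x7fff))    # everything at once

# Writing to /proc/sys/sunrpc/rpc_debug needs root, so just compute the
# value for transport plus call tracing here:
echo $(( RPCDBG_XPRT | RPCDBG_CALL ))
```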

Yes, both sets of spam go to the same dmesg; containers haven't got their own. I talked to some of the containers people (they're in #lxcontainers on freenode and have a conference call every other Thursday at 9:30 am Central; ping Serge Hallyn for an invite). Separate per-container logs are a todo item, but it turns out to be genuinely complicated to decide what goes into the container log and what goes into the global one.

Anyway, at the KVM level, when it works, we get:

RPC:       created transport ffff88003bd73000 with 16 slots
RPC:    63 reserved req ffff88003c07a000 xid f57fa5ed
RPC:    63 xprt_connect xprt ffff88003bd73000 is not connected
RPC:    63 xprt_cwnd_limited cong = 0 cwnd = 256
RPC:    63 xprt_connect_status: connection established
RPC:    63 xprt_prepare_transmit
RPC:    63 xprt_transmit(40)
RPC:    63 xmit complete
RPC:       cong 256, cwnd was 256, now 512
RPC:    63 xid f57fa5ed complete (24 bytes received)
RPC:    63 release request ffff88003c07a000
RPC:    64 reserved req ffff88003c07a000 xid f67fa5ed
RPC:    64 xprt_prepare_transmit
RPC:    64 xprt_cwnd_limited cong = 0 cwnd = 512
RPC:    64 xprt_transmit(40)
RPC:    64 xmit complete

And at the containers level, when it doesn't work, we get:

RPC:       created transport ffff88003c56e000 with 16 slots
RPC:    67 reserved req ffff88003c07a000 xid 379eaebd
RPC:    67 xprt_connect xprt ffff88003c56e000 is not connected
RPC:    67 xprt_cwnd_limited cong = 0 cwnd = 256
RPC:    67 xprt_connect_status: connection established
RPC:    67 xprt_prepare_transmit
RPC:    67 xprt_transmit(40)
RPC:    67 xmit complete
RPC:    67 xprt_timer
RPC:       cong 256, cwnd was 256, now 256
RPC:    67 xprt_prepare_transmit
RPC:    67 xprt_cwnd_limited cong = 0 cwnd = 256
RPC:    67 xprt_transmit(40)
RPC:    67 xmit complete
RPC:    67 xprt_timer
RPC:       cong 256, cwnd was 256, now 256
RPC:    67 xprt_prepare_transmit
RPC:    67 xprt_cwnd_limited cong = 0 cwnd = 256
RPC:    67 xprt_transmit(40)
RPC:    67 xmit complete
RPC:    67 xprt_timer
RPC:       cong 256, cwnd was 256, now 256

And so on. Note the task numbers not advancing: in the working trace 63 went to 64, but here 67 stays 67. Instead xprt_timer fires on the very first RPC transaction, which gets retransmitted ad nauseam until I hit ctrl-c on the mount.
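One way to see the stall mechanically: count the distinct task ids in a captured trace. A healthy mount burns through task ids as requests complete; a wedged one hammers the same id forever. A quick awk sketch, fed a snippet of the failing trace above:

```shell
# Count distinct RPC task ids in rpc_debug output; a wedged mount shows 1.
# Lines like "RPC:       cong 256 ..." have no task id and are skipped.
trace='RPC:    67 xprt_prepare_transmit
RPC:    67 xprt_transmit(40)
RPC:    67 xmit complete
RPC:    67 xprt_timer'

printf '%s\n' "$trace" |
  awk '$1 == "RPC:" && $2 ~ /^[0-9]+$/ { if (!($2 in seen)) { seen[$2] = 1; n++ } }
       END { print n + 0 }'
```

Run against dmesg output from a working mount it prints a number that climbs with the number of RPC transactions; here it prints 1.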

Unfortunately, this doesn't really tell me anything except "we tried to talk to the server and it didn't respond". The _sad_ part is that I'm testing against an address both the host and the target should be able to reach, albeit through two different channels. If it picked _either_ network context it should work, but it doesn't. (Confused network stack is confused.)

Tags: documentation, dullboy