::sysinit:/etc/init.d/rcS
tty1::respawn:/bin/getty -L tty1 115200 vt100
console::askfirst:/bin/sh
But the point is, I got it to work.
Mounting network filesystems in a container doesn't work, because internally the kernel uses the original network namespace rather than the container's network namespace. This means you can mount using IP addresses and routing visible from the _host_, but not using the addresses and routes that should be visible from inside the container.
Wrapping my head around NFS is enough of a roadblock that I'm taking a break to deal with a network filesystem that's crazy in _different_ ways: Samba. It has the same general issues, but it's just one TCP/IP session per mount (modulo reconnecting if that connection breaks), and then everything else it does goes through that connection. No weird TCP vs UDP stuff, no portmap daemon handing out constant information for historical reasons, no layering violations to handle DNS callbacks (or at least much less obvious ones)...
So let's start out by documenting what _works_ here.
First, I set up samba on the host. Make a new directory "/home/landley/samba" containing a simple smb.conf file:
[global]
workgroup = MIDEARTH
netbios name = HOBBIT
security = user

[data]
comment = Data
path = /home/landley/samba
read only = Yes
guest ok = Yes
Then I _deleted_ the /etc/samba/smb.conf that Ubuntu installed by default, because even when I told smbd to read a specific config file with -s, it was sucking in the default config file as well and getting really confused. (Maybe this was an smbmount issue rather than smbd, but I made it go away with a big hammer and got on with life.)
sudo smbd -D -s /home/landley/samba/smb.conf
(Yes, it's exporting the directory containing its own config file, and doing so read only. It's a simple test environment.)
Next, set a samba password for my user account:
sudo smbpasswd -U landley
It doesn't have to be the same as the login password; let's set it to "blah" for now.
Now I can mount the directory from Laptop:
mkdir walrus
sudo mount -t cifs //127.0.0.1/data walrus \
  -o user=landley,pass=blah
Which means that from kvm, this is equivalent:
sudo mount -t cifs //10.0.2.2/data walrus \ -o user=landley,pass=blah
And it does work.
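As an aside, the same mount can also live in /etc/fstab. This is a sketch, assuming the walrus directory sits at /home/landley/walrus (the long option spellings are username=/password=, and bare "user" means something different to mount, so spell them out here; noauto keeps it from mounting at boot):

//127.0.0.1/data  /home/landley/walrus  cifs  username=landley,password=blah,noauto  0 0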
What doesn't work is using 10.0.2.15 from the KVM context, since that's KVM's own eth0 address (so it doesn't connect to the host's 10.0.2.15 interface we set up in the networking howto last message). That address is explicitly set up as a collision occluding the host's, which gives us a test case that works from the container: the container's traffic gets routed through eth1 and the 192.168.x.x address range, where nothing special happens to 10.0.2.x addresses, so they just go to the default gateway.
And from _userspace_ in the container, that's true. But from kernel space (i.e. inside the mount) it uses the KVM-level network namespace, not the container's network namespace: you can mount 10.0.2.2 from the container (which shouldn't work), and you can't mount 10.0.2.15 (which should).
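The userspace half of that is easy to sanity-check with a scratch network namespace. (None of this is part of the setup above; it assumes util-linux's unshare, iproute2, and a kernel that allows unprivileged user namespaces.)

```shell
# A freshly created network namespace contains only an unconfigured
# loopback interface and an empty routing table, so any userspace
# address or routing decision made inside it is completely
# independent of the host's network setup.
unshare -r -n sh -c 'ip -o link show; ip route'
# The link listing shows just "lo" (state DOWN), and "ip route"
# prints nothing: there are no routes at all yet.
```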
This is what I'm working on fixing now. Wading through fs/cifs/connect.c looking at bind_socket() and ipv4_connect() and such...
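For flavor, here's a sketch of the general direction such a fix has to go: capture the mounting process's network namespace at mount time, and create the server socket in that namespace instead of the default one. All names here are hypothetical; this is NOT the actual fs/cifs code, just the shape of the idea using real netns primitives (get_net(), put_net(), current->nsproxy->net_ns, and the struct net * argument that sock_create_kern() takes in modern kernels).

```c
/* Hypothetical sketch, NOT actual fs/cifs/connect.c code. */

struct hypothetical_server {
	struct socket *ssocket;
	struct net *net;	/* namespace captured at mount time */
};

/* At mount time: remember which network namespace mount(2) was
 * called from, instead of implicitly using init_net later. */
static void hypothetical_capture_netns(struct hypothetical_server *server)
{
	server->net = get_net(current->nsproxy->net_ns);
}

/* At connect time: create the socket inside the remembered
 * namespace, so the connection uses the container's routes. */
static int hypothetical_ipv4_connect(struct hypothetical_server *server)
{
	return sock_create_kern(server->net, PF_INET, SOCK_STREAM,
				IPPROTO_TCP, &server->ssocket);
}

/* At unmount/teardown: drop the namespace reference. */
static void hypothetical_release(struct hypothetical_server *server)
{
	put_net(server->net);
}
```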