Solaris/Zones
Zones for Solaris 10
  $ zfs list
  NAME          USED   AVAIL  REFER  MOUNTPOINT
  tank0         11.1G  22.2G   808M  /data
  tank0/zone-0   803M  1.22G   803M  /data/zone-0
  $ zfs create tank0/zone-1
  $ zfs set quota=2G tank0/zone-1
  $ zfs list
  NAME          USED   AVAIL  REFER  MOUNTPOINT
  tank0         11.1G  22.2G   808M  /data
  tank0/zone-0   803M  1.22G   803M  /data/zone-0
  tank0/zone-1    18K  2.00G    18K  /data/zone-1
  $ echo "zone-1:100:Solaris Zone:root,bob::rcap.max-rss=268435456" >> /etc/project
  $ tail -2 /etc/project
  zone-0:100:Solaris Zone:root,alice::rcap.max-rss=268435456
  zone-1:100:Solaris Zone:root,bob::rcap.max-rss=268435456
  $ zonecfg -z zone-1
  zone-1: No such zone configured
  Use 'create' to begin configuring a new zone.
  zonecfg:zone-1> create
  zonecfg:zone-1> set zonepath=/data/zone-1
  zonecfg:zone-1> set zonename=zone-1
  zonecfg:zone-1> add net
  zonecfg:zone-1:net> set physical=hme0
  zonecfg:zone-1:net> set address=10.200.0.126
  zonecfg:zone-1:net> set defrouter=10.200.0.1
  zonecfg:zone-1:net> end
  zonecfg:zone-1> add capped-memory
  zonecfg:zone-1:capped-memory> set physical=64m
  zonecfg:zone-1:capped-memory> set swap=128m
  zonecfg:zone-1:capped-memory> set locked=32m
  zonecfg:zone-1:capped-memory> end
  zonecfg:zone-1> verify
  zonecfg:zone-1> commit
  zonecfg:zone-1> ^D
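In case the configuration ever needs to be recreated (or copied to another host), zonecfg can dump it as a replayable command file; a minimal sketch, assuming a file name of our choosing:

```shell
# Dump the committed configuration as a series of zonecfg commands.
# The path /var/tmp/zone-1.cfg is just an example.
zonecfg -z zone-1 export > /var/tmp/zone-1.cfg

# Replay the file later (e.g. on another machine) to recreate the config.
zonecfg -z zone-1 -f /var/tmp/zone-1.cfg
```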
The verify does not seem to catch everything, as actually installing (cloning) the zone still fails:
  $ zoneadm -z zone-1 clone zone-0
  /data/zone-1 must not be group readable.
  /data/zone-1 must not be group executable.
  /data/zone-1 must not be world readable.
  /data/zone-1 must not be world executable.
  could not verify zonepath /data/zone-1 because of the above errors.
  zoneadm: zone zone-1 failed to verify
  $ chmod 0700 /data/zone-1/
  $ zoneadm -z zone-1 clone zone-0
  zoneadm: zone 'zone-1': clone operation is invalid for running zones.
Oh, right, we have to halt the zone we want to clone from first:
  $ zoneadm -z zone-0 halt
  $ time zoneadm -z zone-1 clone zone-0
  Cloning zonepath /data/zone-0...
  grep: can't open /a/etc/dumpadm.conf

  real    4m22.398s
  user    0m4.331s
  sys     1m7.735s
Hm, strange error message, but the new zone seems to work:
  $ time zoneadm -z zone-1 boot

  real    0m10.801s
  user    0m0.145s
  sys     0m0.083s
  $ zoneadm list -iv
    ID NAME     STATUS     PATH           BRAND   IP
     0 global   running    /              native  shared
     7 zone-1   running    /data/zone-1   native  shared
     - zone-0   installed  /data/zone-0   native  shared
  $ zlogin zone-1
  [Connected to zone 'zone-1' pts/3]
  Last login: Sun Jan  4 18:23:19 on pts/3
  Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
  zone-1# uname -a
  SunOS zone-1 5.10 Generic_137137-09 sun4u sparc SUNW,Ultra-250
  zone-1# uptime
  7:05pm  up  1 user,  load average: 0.96, 0.29, 0.24
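Note that zone-0 stays down after the clone (it shows up as merely installed in the listing above, since we halted it before cloning); it has to be brought back by hand:

```shell
# zone-0 was halted for the clone and does not boot again on its own.
zoneadm -z zone-0 boot

# Verify that it is back in the "running" state.
zoneadm list -v
```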
TODO
- RTFM on the disklayout again, currently we have:
  global# zfs list tank0/zone-1
  NAME          USED  AVAIL  REFER  MOUNTPOINT
  tank0/zone-1  802M  1.22G   802M  /data/zone-1
  zone-1# df -h
  Filesystem   size   used  avail  capacity  Mounted on
  /              0K   802M   1.2G     40%    /
  /dev         2.0G   802M   1.2G     40%    /dev
  /lib          16G   5.7G   9.9G     37%    /lib
  /platform     16G   5.7G   9.9G     37%    /platform
  /sbin         16G   5.7G   9.9G     37%    /sbin
  /usr          16G   5.7G   9.9G     37%    /usr
  zone-1# mount -v | grep /usr
  /usr on /usr type lofs read-only/setuid/nodevices/nosub/dev=154000a on Sun Jan  4 19:04:43 2009
  zone-1# pkginfo | wc -l
      1226
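The read-only lofs mounts of /lib, /platform, /sbin and /usr look like a sparse-root zone, i.e. those directories are loopback-mounted from the global zone via inherit-pkg-dir resources. This can presumably be confirmed from the global zone:

```shell
# List the directories zone-1 inherits read-only from the global zone.
# A sparse-root zone typically shows /lib, /platform, /sbin and /usr here.
zonecfg -z zone-1 info inherit-pkg-dir

# For comparison: a whole-root zone is created with "create -b" in zonecfg,
# which copies all packages and therefore needs considerably more disk space.
```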
- RTFM on resource_controls(5), especially what happened to capped-memory and rcap.max-rss?
  zone-1# /usr/local/bin/top | grep Memory
  Memory: 768M phys mem, 43M free mem, 960M total swap, 915M free swap
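Both the rcap.max-rss entry in /etc/project and the capped-memory resource are enforced by the resource capping daemon, so part of the answer is probably to check whether rcapd is running at all; a sketch, to be run in the global zone:

```shell
# Show the current state of the resource capping daemon (rcapd).
rcapadm

# Enable rcapd if it is not running yet.
rcapadm -E

# Watch per-zone RSS against the configured caps, every 5 seconds.
rcapstat -z 5

# The swap and locked-memory parts of capped-memory end up as
# resource controls on the zone and can be inspected with prctl:
prctl -n zone.max-swap -i zone zone-1
```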
- Although the new zone is responding to ping(1), networking feels kinda b0rked:
  zone-1# netstat -ni
  Name  Mtu   Net/Dest    Address       Ipkts   Ierrs  Opkts   Oerrs  Collis  Queue
  lo0   8232  127.0.0.0   127.0.0.1        288      0     288      0       0      0
  hme0  1500  10.200.0.0  10.200.0.126  943910     10  703980      0   28737      0   <---- these are the GLOBAL packets!
  zone-1# netstat -ni
  <empty>
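This is likely not breakage but a property of shared-IP zones: the interface counters are not virtualized per zone, so netstat -ni inside zone-1 reports the statistics of the physical hme0, i.e. the traffic of every zone on that NIC. What can be checked is that the zone's address is plumbed correctly; a sketch:

```shell
# In the global zone: the zone's address should appear as a logical
# interface (e.g. hme0:1) tagged with "zone zone-1" in the output.
ifconfig -a

# From inside the zone, only interfaces assigned to it are visible.
zlogin zone-1 ifconfig -a
```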
BrandZ
- Solaris 8
- Solaris 9
- Solaris 10
- RHEL
tbd...