Suggestions for Setup of the Computer Lab
Robert Pogson
The lab is a useful facility but it has a number of problems:
ergonomics – seating, positioning of users and screens, comfort and accessibility of equipment are far from ideal.
hardware – some of the older machines have rather small memory and disc storage and they are slow.
software – all of the machines are running some variety of MacOS with a layer to protect the system from users. Systems crash, freeze and refuse to print with no diagnostic messages. Documents may print hours or days later. WWW connection is sometimes interrupted or slow. DNS lookup can take many seconds.
Ergonomics
We have users of all shapes and sizes but only one or two types of chair. It would be advisable to obtain cushioned chairs with adjustable positioning. The room is often warm and the plastic chairs become clammy.
The adjustable keyboard trays are an attempt to make the keyboard position more suitable. The mouse, however, is now too far away and the user must reach for it. The trays have flimsy structures and are failing. The ones without working clamps allow the keyboards to collide with the counter top and be damaged. I recommend extending the counters with a ledge of similar material, or even a plank or sheet of plywood, to make a platform large enough for work, keyboard, mouse, and computer.
The Apple computers, which combine computer and screen in a single unit, often do not allow the screen to be positioned to give the user a normal view. This is important for comfort, stress relief and elimination of glare. I recommend placing blocks under the current computers to raise the screens. In the future, I recommend obtaining separate, larger monitors and computers so that the position of the screen is less compromised.
The machines release kilowatts of heat energy into the room. When it is warm, users open the vent in the window, causing an uncomfortable cold draft. The Tektronix Phaser 850 colour printer is near the window, and in cold weather drafts chill it below its normal operating temperature; it goes into “warm-up” every few minutes and is not available for printing at those times. This printer should be relocated to a spot with a more constant temperature. A more responsive ventilation system would be desirable and might save energy. Could cool air from other parts of the building not be brought in?
Hardware
The older machines have barely enough RAM to run modern software with a graphical user interface (GUI). The old PowerMacs have 180 MHz processors and 1.2 GB hard drives. The old iMacs have 400 MHz processors and 64 MB RAM; these have larger drives. Memory and hard drives are quite reasonably priced these days. We could add 256 MB of RAM and 40 GB of hard drive for about $150 per machine. This would reduce crashing with the present software and permit a more modern operating system like Linux to thrive. This would be a short-term solution. In the long run, these machines should be replaced with IBM-compatible PCs with an ATX motherboard, DDR memory and a good, fast processor such as the AMD Athlon. New systems, close to state of the art, with 256 MB RAM, 17-inch monitors and a 2000 MHz processor can be had for less than $1000. These could be maintained in-house with a Phillips screwdriver and a few spare parts.
I recommend the older machines be refurbished with larger hard drives and additional memory. They would then be able to serve well enough in the lab or classroom. As they are, there are some uses for which no changes would be needed:
One or two of them could address the printing problem by being set up as a print server and backup. With only one print queue to manage, the problem of stale jobs being printed would be gone. With a minimal MacOS partition for booting and a minimal (GUI-less) Linux installation, the hard drive still has about 500 MB of free space, which is quite adequate for print queues. In the event more space is needed, the two servers or other Linux systems can work together, combining disc storage over the network. Linux is a high-availability system, so there would be fewer printing problems even without changing the software on other computers in the lab. It is also trivial to have the print queues cleared of stale jobs; Linux usually clears any job that times out automatically.
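As a sketch of how the print server might be set up (the queue name and spool directory are assumptions; the printer's address is the one given later in this document), a classic /etc/printcap entry on the server could read:

# /etc/printcap on the print server -- a minimal sketch
lp|phaser|Tektronix Phaser 850:\
        :rm=192.168.0.3:\
        :sd=/var/spool/lpd/lp:\
        :mx#0:

Each client would then name the print server machine as its printer, and the server's single queue would feed the Phaser.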
When surfing, a user consults Domain Name Servers (DNS) many times in a session. I have measured delays of up to 8 s when finding the IP address of a URL on the WWW. It would be very helpful to run a DNS server in the lab instead of going out to the WWW to find IP addresses of websites. If an address on the WWW has been accessed in the last day or so, a DNS server in the room can store it and deliver it within milliseconds the next time it is needed. Tests show that when the connection is behaving, the average lookup time decreases from about 300 milliseconds to about 30 milliseconds; when the connection is slow, the average remote lookup rises to one second while the local DNS takes no longer than before. It takes five minutes to install a DNS server on a Linux machine and to share it with the lab. One of the old machines could provide this service.
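A caching-only BIND configuration is only a few lines; a minimal sketch (the forwarder address, our gateway, is an assumption about where outside lookups should be relayed):

// /etc/named.conf on the lab DNS machine
options {
        directory "/var/named";
        forwarders { 192.168.0.1; };  // relay unknown names to the gateway
        forward first;                // fall back to direct lookups if needed
};

Each machine in the lab would then list the lab DNS machine first in its resolver settings.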
We use fixed network addresses in the lab. This can be very inconvenient when adding a machine to the network or when setting up a machine; in many cases, one has to type in IP addresses and DNS addresses by hand. The old machines could provide a service called Dynamic Host Configuration Protocol (DHCP), which gives this information to a machine when it boots up. If all addresses were assigned dynamically, we would have to update the DNS dynamically, but a compromise can be made by giving servers and printers fixed IP addresses and all others dynamic ones. If two computers on the network need to share information, they could use commonly accessible directories on the servers. We could have read/write directories for each group, one for all groups, and similarly for read-only directories. We can give teachers write access to these shared directories.
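A sketch of such a compromise in /etc/dhcpd.conf (the address ranges are assumptions, chosen to leave room below .100 for the fixed servers and printers):

# /etc/dhcpd.conf on the DHCP machine
subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.100 192.168.0.200;       # dynamic clients
        option routers 192.168.0.1;              # the gateway
        option domain-name-servers 192.168.0.2;  # the lab DNS machine
}

Servers and printers simply keep fixed addresses outside the dynamic range.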
Every Mac in the lab shows a different time of day, except the Linux test machines, which agree to the second using NTP. The old machines can serve as Network Time Protocol (NTP) servers to synchronize the clocks. They could check precision clocks on the WWW and relay that information to the lab. Very little network traffic is involved, and it takes only a few minutes to correctly set all the clocks in the lab. If a computer is off for a long period of time, its clock may need to be set manually. Correct time is not a luxury; it is very useful. The manager of the lab can set up backups to run at off hours. If the clocks are wrong, a backup might start in the middle of a class. Students will know when to finish up work without prompting if they know the clocks are correct. Teachers can also schedule actions to take place on the network on a schedule determined in advance.
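The configuration is again tiny; a minimal sketch of /etc/ntp.conf (the upstream server name is an assumption, and any public precision clock would do):

# /etc/ntp.conf on the lab time server
server time.nrc.ca    # a precision clock on the WWW

# on each client, point at the lab machine instead:
# server 195.195.195.1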
Many users visit the same websites every day; http://hotmail.com and the like are famous. The old computers can run webservers to make cached web content available so that the same start page does not need to be downloaded over the limited bandwidth of the WWW connection each time. Our connection appears to be capable of 200 kB/s but is throttled to 45 kB/s per user for fairness. The old computers can deliver up to 800 kB/s on a 10 Mb/s network, so we could get at least a fourfold increase in page loading; I have done tests which show speedups of over 30 times. Teachers could use this capability to download web content at off hours and point students to it during classes. Instead of taking 30 s to load each screen from a slow static site, we could load all screens in 30 s or less. The 500 MB of free space on the old drives would be a limiting factor; using networked drives would be several times slower.
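A sketch of how a teacher might pre-load content at off hours (the URL and the webserver's document root are placeholders):

#!/bin/sh
# run from cron at night: mirror a site into the local webserver's
# document root; students then browse it at http://server/cache/...
wget --mirror --convert-links --no-parent \
     -P /var/www/html/cache http://example.edu/lessons/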
We use 10 Mb/s networking equipment. We could change to 100 Mb/s for about $15 per NIC. If the hubs/switches cannot handle 100 Mb/s, they could be replaced for about $10 per connection. Another solution is to use multiple Network Interface Cards (NICs) to connect machines. An IBM-compatible PC can have one connection on the motherboard and five or more in the slots of the motherboard. Thus, the server can connect to five PCs that can each connect to five PCs, for a total of 31 machines networked. That is about the size of the lab. A disadvantage of this method is that the cables may need to be changed from straight-through to crossover, and the first six machines would need to be on for the network to function. Macs do not have this high fan-out capability, so it would not be applicable without a clean sweep. A big advantage of this network configuration is much higher bandwidth: we would have 500 Mb/s out of our server instead of 10! The lights might blink if everyone pushes “return” at the same time... This is near the limit of what our oldest machines can do, which gives an idea of how far from ideal our setup is. On a busy network, collisions of packets reduce throughput. This fan-out arrangement greatly reduces collisions because there are multiple data paths carrying the same volume. Actual tests of downloads from local servers: 200 kB/s from an old Mac running Linux to a new Mac running MacOS, and 800 kB/s from an old Mac running Linux to a new Mac running Linux. In my home, an old 233 MHz PC running Linux on a 100 Mb/s LAN carried 6.4 MB/s with about 7% CPU utilization. A 100 Mb/s LAN can pretty well keep up with a hard drive, so a server delivering this kind of bandwidth would need a lot of RAM and/or multiple hard drives to hold the most popular cached content. This is not a problem for a well-equipped PC.
We expect to receive 10 more modern IBM-compatible PCs shortly. These machines will likely be supplied with less-than-speedy processors and newly installed Microsoft Windows software. Such software has most of the problems we have with MacOS and should not be used. I recommend installing Linux if the hardware is compatible. Linux is much faster for networking and moving data, and very good software is available cheaply for everything except the typing tutors. There are acceptable programmes available; they are just not as fancy. Linux is growing rapidly, and in a year or two the best typing tutors will be available on Linux as more schools switch to it. If the IBM machines are full sized, they could be used for the server functions described previously and still function as general-purpose machines. The server functions would likely use less than 10% of CPU power because a modern CPU runs at more than 1000 MHz. The IBM PCs could do the network fan-out and the Macs could be at the ends of the lines, because they usually have only a single NIC. A big advantage of this fan-out is that we could use this server power to install Linux on the whole lab over the network. It takes about an hour to install a full version of Linux from a CD and 15 minutes from a hard drive or a fast network. The network installation could do several machines at one time.
In the future, I recommend obtaining machines with the following specification:
ATX motherboard with built-in 10/100 network connection (with RAID for servers)
mid-tower case
optical mouse and mid-range keyboard
17-inch flat monitor
AMD CPU a few steps back from the state of the art (a 2000 rather than 3000 MHz speed rating gives much better performance per price than Intel chips or the latest and greatest)
256 MB DDR 400 MHz RAM
40 GB 7200 rpm hard drive
for servers: 1.5 GB RAM, several 120 GB 7200 rpm hard drives in software RAID, five 100 Mb/s NICs, and a full-tower case (to hold more drives)
Such machines are flexible in configuration and can be maintained by replacing parts for the foreseeable future using only a screwdriver.
There are two important points that need to be added to the picture above: routing, and backing up the server. All PCs between the server and the outlying computers would need to be configured for “connection sharing”, relaying packets between the others and the server. Setting this up is a few lines of configuration text for iptables and route, two programmes that do these things very efficiently. Backing up the server automatically in the above configuration is difficult, as failure of the server brings down the whole network. The simplest solution is to have one of the outlying PCs fully equipped with five NICs and bearing two configurations: normal, and spare server. Upon failure of the server, this special computer would receive the six network cables going to the server and be switched to the new role by a command from the administrator. This machine would need to be near the normal server to minimize difficulty with the changeover. With Linux, one would not even need to reboot, because the OS is not changed, only a few text files and the set of programmes running in the background. The set of users and passwords would be automatically up to date, but the directory structure might be a few hours out of date. As part of the changeover, the /home directory would need to be remounted, and this computer would need to have the necessary disc capacity. When system activity is reduced during the day, this backup computer could run a process to mirror the original server's data; for example, in a five-minute change of class, an hour's worth of changes could be copied. Changeover would involve switching configuration and service files in runlevel 1.
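The “connection sharing” amounts to a few commands on each relaying PC; a sketch, assuming eth0 faces the server and eth1 through eth5 face the outlying machines (the branch subnet is also an assumption):

#!/bin/sh
# let the kernel relay packets between the NICs
echo 1 > /proc/sys/net/ipv4/ip_forward
# tell the kernel which branch lives on which NIC
route add -net 195.195.196.0 netmask 255.255.255.0 dev eth1
# masquerade only if the server cannot route back to the branches directly
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE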
Software
I have already mentioned the advantages of using the Linux operating system in the lab. Once installed and configured, it is more reliable and faster than MacOS or Windows. Apple recognized the deficiencies of MacOS when it came out with MacOS X; that latest MacOS comes from a completely different code base, derived from the UNIX operating system. UNIX was developed over several decades by AT&T and until recently formed the software of most computers serving web content on the WWW. UNIX is a true multi-user system that can have hundreds of people running hundreds of programmes all at the same time on the same machine, with no problem other than providing enough computing power and resources. Resources are plentiful now. The only reason everyone does not use UNIX is that it is a proprietary system with a restrictive licensing scheme: you can usually run only one system in one place, with a licence that may cost hundreds of dollars. Faced with this, Apple and Microsoft began developing their own operating systems from scratch. Both followed a similarly restrictive licensing scheme, using the business model that the software was owned by the companies and a lot of money could be made because they were the only game in town for the ordinary user. So millions of users paid $150 for an OS licence and a similar amount for software like WORD or APPLEWORKS. Both companies' businesses grew into monopolies, Microsoft for the Intel-based IBM-compatible PC and Apple for its computers with various processors, starting with the Motorola 68000 and continuing through later partnerships between Apple and IBM.
At first, the amazing capability of the PC hid a great flaw in this software: it did not work. Businesses wanted to sue for damages caused by restrictive trade practices and faulty software, but the licences said there were no guarantees, and both companies did things to pressure manufacturers to use only their software. Eventually, Apple built in the ability to use other software on its machines, and Microsoft was convicted of monopolistic trade practices. In any case, if people were forced to buy cars with only one kind of engine, another kind would eventually be developed, and it was. Linus Torvalds recognized that he could write an operating system that would work more or less as reliably as Unix did, and that he could distribute it under an open licence. His operating system was called Linux, and thousands of people were so overjoyed to participate in the project that the software, which is visible to everyone, rapidly improved in quality; every bug was addressed and every useful feature was included. The licence is free, and the terms are that the software may be copied, given away, used on any number of machines and modified, as long as the code remains open, visible and included with distribution of the software. Today there are a hundred distributions of Linux and many excellent programmes available. This Linux software is ideal for our school because it is inexpensive and reliable. The Linux stations I have installed do not crash or freeze. A few programmes have, but that is because most programmes are in rapid development and this is expected; most programmes on Linux are as reliable as or more reliable than those that come with other operating systems. My Linux boxes have failed to print only once, when I had left the default printer set to a non-working machine. Compare that with the others, which fail daily. We do not have a large investment in software to hold us in servitude to the commercial companies, and it is of benefit to our students to be knowledgeable about a growing movement in computing.
While the software may be free, there is some retraining needed. Fortunately most documentation for Linux is also free:
tldp.org Formal documentation of Linux and its components
linuxiso.org Where to download Linux and to discuss with others
distrowatch.com Helps compare one distribution of Linux with another
linuxmandrake.com My recommendation for a distribution.
redhat.com The distribution on which Mandrake is based. Good documentation.
openoffice.org OpenOffice website. Descended from StarOffice, Sun Microsystems' competitor to MS Word.
www.opera.com Opera, a closed but free web browser, is the best for older machines. It has neat shortcuts and a full-screen mode.
The best way to learn about Linux is to use it, not to read about it. My students have found it takes an hour or so to become comfortable with it, and then they do not notice any difference except the reliability. To administer and set up the system requires much more knowledge. I am leaving in a little over a month, which is plenty of time to install the new machines and to teach whoever wants to learn how to do that and how to maintain the system.
The major difference between Linux and the other OSes is that Linux has networking features built in; Unix was designed from the start to run on networked machines. The artificial layers placed on MacOS, which is not a true multi-user system, just make its unreliability worse. Linux shines in this role. The beauty of Linux is that one can imagine dozens of configurations of the software to suit our purposes. Whatever we want Linux to do, it can.
The major role of our software is to allow anyone to log in anywhere and to access their own files. Teachers may want to group students and to share files with students in the group, either read-only or read-write. We absolutely do not want others to be able to access files to which they are not authorized. This is the default behaviour of Linux and requires little effort to set up:
Create directories for users and groups on the server (and a backup); a sketch of the commands follows this list.
Share the encrypted passwords with each computer in the system. This can be done by periodic copying or continuous network access.
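A sketch of the directory setup for one class group (the group, directory and teacher names are placeholders):

#!/bin/sh
# a read/write directory shared by group classA
groupadd classA
mkdir -p /home/shared/classA
chgrp classA /home/shared/classA
chmod 2770 /home/shared/classA   # setgid: new files inherit the group
# a read-only directory that the teacher (its owner) can write to
mkdir -p /home/shared/classA-readonly
chown teacher:classA /home/shared/classA-readonly
chmod 2750 /home/shared/classA-readonly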
When a user logs in anywhere, he supplies a clear password, which is encrypted and compared with the stored encrypted password. A match gives him access to his files on the server. These files are usually small typed documents and of no consequence to the network. They can be shared transparently by Network File System (NFS), or we can use Secure Shell (SSH) to log in to the server. SSH is like using a debit machine (everything passed along the network is encrypted), but it means more work for the server, as all programmes would run on it, unless users log in on their own machines, do a Secure Copy (SCP), work locally and write back to the server to save. We can set an icon to point to a save or logout script that would take care of this, as sketched below. This would be the easiest secure method to set up, as it would give easy access to the GUI; running the GUI over the network would slow it down. Running programmes on the server is no problem with fast processors and Linux... and you can always add additional servers to expand. Did you notice there are three methods we can use with Linux, right out of the box? It would take less than an hour to set up a server for this arrangement, and the clients could be copies of a similar installation with no effort after the first one. Maintaining the passwords would take more time unless we used a single password per group, as now.
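A sketch of the save script the icon could run (the server's name and the working directory are assumptions):

#!/bin/sh
# copy the local working directory back to the user's home on the server
scp -pr "$HOME/work" "$USER@server:"

The logout version would do the same copy and then end the session.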
The simplest method to configure would be to use NFS to mount the users' directories on each client computer. That is a single line of text in a configuration file on each computer. It would rely on strong passwords to protect files from improper use. We can give each user their own password initially and change it periodically or as needed. I have written a simple computer programme which reads a list of names, generates passwords and encrypts the passwords in the form required by Linux. People could change their passwords to whatever they like, except we would require them to be at least six characters in “a-z”, “A-Z”, “0-9” and “./”. Linux can enforce rules against using “password” or other simple words as passwords.
A simple programme that enters users automatically into the system (instead of doing so manually) follows. All it requires is a list of groups and users. It generates random passwords to be issued.
program mkusers;
(*This is a programme written in PASCAL (because I do not know C) to prepare a script of commands
to install a list of users on a Linux system. The programme reads a file of usernames, generates
random passwords and outputs the script that can create those accounts and sends a list of names
and passwords to stdout where they may be redirected. Suggested usage: given a file A having group A users, and a file B having group B users, the following commands will enter all users on the Linux
system and create their home directories:
mkusers A A add.sh > text
mkusers B B add.sh >> text
./add.sh
lp text
This programme is similar to the newusers batch user creation programme.*)
{$LINKLIB crypt}
function crypt(s1:pchar;s2:pchar):pchar;cdecl;external;
var user,group,a,b:string;
f,g:text;
escbuf:string; (*static buffer so escape's result stays valid after it returns*)
function space(n:integer):string;
(*returns a string of n blanks, used to align columns in the password list*)
var s:string;
begin
s:='';
while n>0 do begin s:=s+' ';dec(n) end;
space:=s
end;
function escape(s:pchar):pchar;
(*places \ in front of $ so that bash will not interpret them *)
var j,k:integer;
begin
j:=0;k:=0;
while s[j] <> chr(0) do
begin
if s[j]='$' then begin escbuf[k+1]:='\';inc(k) end;
escbuf[k+1]:=s[j];
inc(j);inc(k)
end;
escbuf[k+1]:=chr(0);
escape:=@escbuf[1]
end;
function random_string(length:integer):string;
(*generates a random string of the given length, drawn from the 64 characters
"./0-9A-Za-z" accepted in passwords and crypt salts*)
const alphabet:string='./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
var s:string;
begin
s:='';
while length>0 do begin s:=alphabet[random(64)+1]+s;dec(length) end;
random_string:=s
end;
begin
randomize;
if paramcount<3 then writeln('usage: mkusers group file_of_users script_out')
else
begin
writeln('attempting to open ',paramstr(2));
(*$I-*)(*turn off run-time IO checking so ioresult can be tested below*)
assign(f,paramstr(2));reset(f);
if ioresult <> 0 then begin writeln(paramstr(2),' does not exist. aborting');exit end;
group:=paramstr(1);
writeln('group=',group);
assign(g,paramstr(3));
reset(g);
if ioresult=0 then
(*append or create output file if it does not exist*)
begin writeln(paramstr(3),' exists. appending');close(g);append(g) end
else begin writeln(paramstr(3), ' not found. creating ',paramstr(3));rewrite(g);writeln(g,'#!/bin/bash') end;
(*position of output file at end of data*)
while not eof(f) do
(*Read users from the file_of_users *)
begin
readln(f,user);
(*write the command for Linux to add one user*)
(*salt for MD5 crypt: "$1$", seven random characters and a closing "$",
NUL-terminated for the C crypt routine*)
a:='$1$'+random_string(7)+'$'+chr(0);
b:=random_string(6);
writeln(user,space(14-length(user)),group,space(14-length(group)),b);
b:=b+chr(0);
(*generate commands like useradd -g group -n -p encrypted_password user *)
writeln(g,'useradd -g ',group,' -n ',user,' -p ',escape(crypt(@b[1],@a[1])))
end;
close(f);close(g)
end
end.
The advice given to users of Linux by RedHat is:
Remember the following two principles
Protect your password
Don't write down your password - memorize it. In particular, don't write it down and leave it anywhere, and don't place it in an unencrypted file! Use unrelated passwords for systems controlled by different organizations. Don't give or share your password, in particular to someone claiming to be from computer support or a vendor. Don't let anyone watch you enter your password. Don't enter your password to a computer you don't trust. Use the password for a limited time and change it periodically.
Choose a hard-to-guess password
passwd (the password changing routine) will try to prevent you from choosing a really bad password, but it isn't foolproof; create your password wisely. Don't use something you'd find in a dictionary (in any language or jargon). Don't use a name (including that of a spouse, parent, child, pet, fantasy character, famous person, or location) or any variation of your personal or account name. Don't use accessible information about you (such as your phone number, license plate, or social security number) or your environment. Don't use a birthday or a simple pattern (such as the account name backwards, followed by a digit, or preceded by a digit). Instead, use a mixture of upper and lower case letters, as well as digits or punctuation. When choosing a new password, make sure it's unrelated to any previous password. Use long passwords (say 8 characters long). You might use a word pair with punctuation inserted, a passphrase (an understandable sequence of words), or the first letter of each word in a passphrase.
We can allow or prohibit individual users changing their passwords. If we allow users to change their passwords, they must login to the server and change the password there:
#!/bin/sh
#An icon on the desktop can be clicked to run this script.
#The user will be asked for the present password twice: once to connect
#with the server and again to change the password.
#-t allocates a terminal so passwd can prompt; passwd runs on the server.
ssh -t server passwd
To prevent accidental use of passwd on the client we can remove execute permission for all. The old login will remain valid as long as the user is logged in. The next time they login, the new password will be required. We may have to run rsync or some other programme to synchronize files.
We can use SCP or SSH to keep copies of the encrypted passwords on clients' computers up to date.
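A sketch of such an update, run on the server (the client host names are placeholders):

#!/bin/sh
# push the current password files from the server to every client
for host in client01 client02 client03
do
scp -p /etc/passwd /etc/shadow /etc/group root@$host:/etc/
done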
On the Server:
run nfsd at boot
in /etc/exports:
/home 195.195.195.0/24(rw)
keep the master copies of /etc/passwd, /etc/shadow and /etc/group

On the Clients:
run nfs at boot
in /etc/fstab:
195.195.195.1:/home /home nfs auto,_netdev,rsize=8192,wsize=8192,nouser
keep secure copies of /etc/passwd, /etc/shadow and /etc/group
This simple setup will do exactly what the present system is supposed to do but does not. Because we cannot get the server working properly with MacOS, we are suffering. Linux, being open source and well documented, will be entirely under our control. We are not limited by a contract or licence in the operation of this software.
There is still another solution in Linux: NIS, the Network Information System. NIS runs as a server keeping a database of users, passwords and much other information about the network, and as programmes on each client to permit login at each client computer. A single file, /etc/nsswitch.conf, tells the client computer to check NIS for the network user names and the local file /etc/shadow for local user names. NIS is designed to be distributed over several computers, so that failure of one still permits the job of keeping the database to continue. One NIS server is the master and can update the slave NIS servers periodically or as changes happen.
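The relevant fragment of /etc/nsswitch.conf on a client is only a few lines; a minimal sketch:

# check local files first, then ask NIS
passwd: files nis
shadow: files nis
group:  files nis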
The software to install on all Macs and IBM compatibles has already been downloaded from the www. We have on hand:
A set of 3 image files of Mandrake 9.1 for IBM compatible PC, and
A set of 3 image files of Mandrake 9.1 for Powermacs
It will take an hour to burn these six files to six CDs to start installation. After the server is set up, the rest can be installed over the network. This would be slow in our present configuration, as we have only 10 Mb/s, so it may be better to burn a larger number of CDs. The whole process could be done in a day. This software was released this month and is receiving rave reviews from all who try it. It is very easy to install and can be maintained via GUI, from the console keyboard, or remotely by SSH or HTTPS. It is twice the software we have now. It is many times more reliable because it is Open Source.
It is possible to dual-boot the present OS and Linux, but it is difficult to have part of the system running one OS and part the other because of the networking. We could use fixed IP addresses and have both a Linux server and a MacOS server running, but that would be complicated in that people would switch back and forth. With Linux there is no need to reboot except to repartition the hard drive or add internal hardware. With the other OS, it is necessary to reboot frequently; Linux is fast to run but slow to reboot, and rebooting is a waste of time. A way to avoid this struggle on the network is to use a different subnet for Linux than for the other OS. We now use 192.168.0.x. If we use 195.195.195.x for Linux, neither system would be aware of the other. Only the print server needs to reach the printers, which are 192.168.0.3 and 192.0.0.192, and the gateway, which is 192.168.0.1.
We use a proxy server setting in each browser on each account in the lab. This causes problems because students can tinker with it. We could centralize the proxy setup on the Linux server by doing transparent proxying, so that we do not have to set proxy addresses on every machine in the lab. Every request for an address not on the local network would go to the Linux server and be converted to a proxy request to 192.168.0.1:9202.
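A sketch of the redirection rule on the Linux server (eth1 as the lab-facing interface is an assumption, and whether the division proxy accepts intercepted connections would need testing):

# send all outbound web requests from the lab to the division proxy
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.0.1:9202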
There is no reason that we cannot have two servers running for printing; the printers have built-in servers that can sort out connections. The Linux server can deal with IBM compatibles running XP by using Samba. I would have to look into how the user authorization of XP would be handled. Linux can be an NT domain server if needed, but that is an unnecessary complication.
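A minimal Samba sketch for serving home directories to XP machines (the workgroup name is an assumption):

# fragment of /etc/samba/smb.conf
[global]
workgroup = LAB
security = user
encrypt passwords = yes
[homes]
read only = no
browseable = no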
It is very important that machines be properly shut down, given open files on the server and machines rebooting between two operating systems. Linux has a journalling filesystem which usually handles such problems well, but we should not trust our luck.
Conclusion
With some work, the network can be improved greatly in capability and reliability. All the necessary changes could be done in a week, with school in operation, by making changes on evenings and weekends. A sequence like this:
Install a server on one of the new iMacs or, preferably, on one of the newer PCs. Include students' and teachers' IDs. Include a user group for each class, which would allow sharing files within a group. Install NFS, NIS, BIND, Apache, DHCP, NTP, the transparent proxy, OpenOffice and Opera, and serve installation files for Mandrake 9.1 for ppc and i586, OpenOffice, and Opera. Start httpd, dhcpd and nfsd, exporting /home. Configure networking on one or more subnets. This should take a few hours, including creation of users and groups.
Using the server and an installation CD, start the installation of the client computers. One or many can be done at a time. Since we are prepared and have already installed the server, this should take only a few hours. The iMacs are reluctant to release the CD, so we may need one copy for each iMac installed at one time. The major effort will be to specify a disc partitioning strategy and to select software to install. On the PCs, the selections can be replayed from a floppy recorded at the end of the first installation on a client PC, so we could do one first and seven or eight others all at the same time if the machines are identical. Printing configuration can be done at installation time. Each machine will be given a name and domain like “machine.fmhs.edu”. BIND can be informed of this scheme if ever we want students to work on a bunch of servers for courses. The lab will be a small model of the WWW.
Using NIS, provide user and group authentication on each client computer from the server, and edit /etc/fstab to mount the /home directory at boot. Mount /home as root and test the system. This could be done by downloading a script from the server and executing it, or, even more easily, by including an executable file in /home on the server, then mounting /home and executing the file on each machine:
#!/bin/bash
# executed as root after: mount -t nfs server:/home /home -o rw; cd /home
cat fstab >>/etc/fstab #mount /home from the server at future boots
cp printcap /etc/printcap #file pointing to the print server
cp yp /etc/yp.conf #tell ypbind where the NIS server is
cat rc.local >>/etc/rc.d/rc.local
cp nsswitch /etc/nsswitch.conf #consult NIS for users and groups
/etc/init.d/portmap start
mkdir -p /var/yp
cp ypservers /var/yp/ypservers
/etc/init.d/ypbind start
This step can be done in seconds. The client should be fully functional and ready for ordinary users.