I don't have that problem, but I had a thought about the split directory issue. I'm running a Linux 2.4 kernel
and using the Reiser journaling file system. Unlike ext2, the Reiser file system stores its directory
information in a b-tree database, and because of this it performs much faster than ext2 on certain operations.
One of the places where Reiser really helps a lot is the ability to find and open files in very large
directories. After reading some of the messages here - 1,000,000 spooled files - I thought I'd mention
this as something that someone might try.
More about Reiser at
http://www.namesys.com
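
If anyone wants to see the difference for themselves, here's a rough little Python sketch - the directory path and file counts are just placeholders I made up, so adjust them for your own setup - that fills a directory with files and then times name lookups. On ext2 the directory is searched linearly, so both phases slow down as the file count grows; Reiser's b-tree keeps them fast.

#!/usr/bin/env python3
# Rough sketch (mine, untested on Reiser): fill a directory with lots of
# small files and time how long the name lookups take.  For a fair
# comparison between ext2 and Reiser you'd want to remount the filesystem
# (or otherwise drop caches) between the create phase and the open phase,
# otherwise the kernel's dentry cache hides the on-disk lookup cost.
import os
import time

TEST_DIR = "/tmp/dirtest"   # put this on the filesystem you want to test
NUM_FILES = 100000          # raise this to really stress the directory

os.makedirs(TEST_DIR, exist_ok=True)

# Create phase: each create has to insert a new name into the directory.
start = time.time()
for i in range(NUM_FILES):
    open(os.path.join(TEST_DIR, "spool-%06d" % i), "w").close()
print("created %d files in %.1f seconds" % (NUM_FILES, time.time() - start))

# Lookup phase: open every 100th file by name.
start = time.time()
for i in range(0, NUM_FILES, 100):
    f = open(os.path.join(TEST_DIR, "spool-%06d" % i))
    f.close()
print("opened %d files in %.3f seconds" % (NUM_FILES // 100, time.time() - start))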
I personally am running Reiser on 4 servers and I love it! One of the things I like the best is that I can
jerk the cord out of a running computer and it only takes 15 seconds to recover on reboot. But I don't have a
million files in a directory to really test it.