Wednesday, August 25, 2010

Maximum argument length in Linux

From a 2004 Slashdot interview with Rob Pike:
I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!
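(If you want to check the figure Pike is quoting: getconf reports the execve(2) limit in bytes, and on Linux kernels of that era it typically printed 131072, which is indeed "somewhere north of 100K".)
% getconf ARG_MAX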
Pike is referring to this problem, which most commonly shows up when a '*' wildcard expands to too many files.  As for the examples given there, I would say that 3a/b might as well be a Perl/Python/Ruby script (the same per-file idea is sketched as a shell loop further down).  I would also add Example 2b:
% find $directory1 -maxdepth 1 -type f -print0 | xargs -0 mv --target-directory=$directory2
(if --target-directory is available on mv, as it is with GNU coreutils), since xargs keeps each generated command line comfortably below ARG_MAX, and the -print0/-0 pair keeps file names with embedded whitespace intact. Or just use the little-known plus sign in find, which batches arguments the same way xargs does; the plus form needs the target directory up front, hence -t, the short form of --target-directory:
% find $directory1 -maxdepth 1 -type f -exec mv -t $directory2 {} +
That might be the fastest solution.
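For completeness, the per-file approach that the scripted examples boil down to can also be written as a plain shell loop (a sketch, reusing the $directory1/$directory2 placeholders from above): the glob is expanded by the shell itself, so ARG_MAX never enters the picture, and each mv sees only two arguments. The price is one exec per file, which makes it the slowest of the lot.
% for f in $directory1/*; do [ -f "$f" ] && mv "$f" $directory2; done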

Another idea, from Lyren Brown, which copies rather than moves, and works because the glob is expanded inside the shell and the file names travel through a pipe instead of an exec argument list:
% for f in *foo*; do echo "$f"; done | tar -T/dev/stdin -cf - | tar -C/dest/path -xvf -
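With GNU tar, a slightly more defensive variant of the same pipeline (same hypothetical *foo* glob and /dest/path) feeds NUL-terminated names, so file names containing spaces or newlines survive the trip; printf is a shell builtin, so the expanded list never has to pass through execve:
% printf '%s\0' *foo* | tar --null -T/dev/stdin -cf - | tar -C/dest/path -xvf -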
Apparently, the latest Linux kernels (2.6.23 and later) finally remove any practical limit: the combined size of the argument and environment strings is now capped at a quarter of the stack rlimit rather than at a fixed 128 KB buffer (though each individual argument is still limited to 128 KB).
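On a 2.6.23-or-later kernel with a recent glibc, the two numbers can be compared directly; with the default 8 MB stack, getconf reports roughly 2 MB for ARG_MAX, a quarter of ulimit -s once the units (KiB versus bytes) are accounted for:
% ulimit -s
% getconf ARG_MAX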
