command: arg list too long
2002-06-06 .. 2013-12-10 (see recent changes)
Here you'll find details about ARG_MAX (with some more details in the footnotes).
You will see this error message if you try to call a program with too many arguments, that is, most likely in connection with pattern matching:

$ command *

On some systems the limit is even hit with "grep pattern /usr/include/*/*" (apart from that, using find would be more appropriate).
It's only the exec() system call and its direct variants which yield this error. They return the corresponding error condition E2BIG (<sys/errno.h>). The shell is not to blame, it just delivers this error to you. In fact, shell expansion is not the problem, because here exec() is not needed yet.
Expansion is only limited by the virtual memory system resources [1]. Thus the following commands work smoothly, because instead of handing over too many arguments to a new process, they only make use of a shell built-in (echo) or iterate over the arguments with a control structure (for loop):

/dir-with-many-files$ echo * | wc -c
/dir-with-many-files$ for i in * ; do grep ARG_MAX "$i"; done
There are different ways to learn the upper limit:

getconf ARG_MAX [2]
sysconf(_SC_ARG_MAX) [3]
ARG_MAX in e.g. <[sys/]limits.h> [4]

(However, on the few systems that have no limit for ARG_MAX, these methods might wrongly print a limit.)
From Version 7 on, the limit was defined by NCARGS (usually in <[sys/]param.h>). Later, ARG_MAX was introduced with 4.4BSD and System V.
In contrast to the headers, sysconf
and
getconf
tell the limit which is actually in effect.
This is relevant on systems which allow changing it at run time (AIX),
by reconfiguration (UnixWare, IRIX),
by recompiling (e.g. Linux) or by applying patches (HP-UX 10)
- see the end notes for more details.
(Usually these are solutions for special requirements only,
because increasing the limit doesn't solve the problem.)
[1] | However, in contrast to such expansions (which includes the literal overall command line length in scripts), shells do have a limit for the interactive command line length (that is, what you may type in after the prompt). But this limit is shell specific and not related to ARG_MAX. Interestingly, putenv(3) is only limited by system resources, too. You just can't exec() anymore if you are over the limit. |
[2] | 4.4BSD and its successors (NetBSD since 1.0, OpenBSD 2.0, FreeBSD 2.0) provide: sysctl kern.argmax. getconf in turn was introduced on BSDs with these versions: NetBSD 1.0, OpenBSD 2.0, FreeBSD 4.8. |
[3] | Example usage of sysconf():

#include <stdio.h>
#include <unistd.h>

int main() {
    return printf("ARG_MAX: %ld\n", sysconf(_SC_ARG_MAX));
}

|
[4]
| A handy way to find the limits in your headers, if you have cpp(1) installed (inspired by Era Eriksson's page about ARG_MAX). |
When looking at ARG_MAX/NCARGS, you have to consider the space consumption of both argv[] and envp[] (arguments and environment). Thus, for a good estimate of the currently available space, you have to decrease ARG_MAX at least by the result of "env|wc -c" and by the result of "env|wc -l" times 4 [5].
[5] | Every entry in envp is terminated with a null byte. The env utility adds a terminating newline instead, so the result of "wc -c" is the same. "wc -l" in turn accounts for the number of pointers in envp, i.e., usually 4 bytes each, according to sizeof(). Some modern shells allow for exporting functions to the environment. The above then slightly miscalculates, because such entries may span multiple lines. |
POSIX suggests subtracting an additional 2048 so that the process may safely modify its environment. A quick estimation with the getconf command (all the calculations inspired by a post from Gunnar Ritter in de.comp.os.unix.shell, <3B70A6AD.3L8115910@bigfoot.de>):

expr `getconf ARG_MAX` - `env|wc -c` - `env|wc -l` \* 4 - 2048

or, if you even want to consider wrapped functions or variable values [5]:

expr `getconf ARG_MAX` - `env|wc -c` - `env|egrep '^[^ ]+='|wc -l` \* 4 - 2048
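The same calculation as the expr lines above, wrapped into a small script (a sketch; like those lines, it assumes getconf is available and a pointer size of 4 bytes):

```shell
#!/bin/sh
# Estimate the space currently available for new arguments:
# ARG_MAX minus the environment bytes, minus the pointer overhead,
# minus the 2048 bytes POSIX recommends as headroom.
limit=$(getconf ARG_MAX)
env_bytes=$(env | wc -c)
env_ptrs=$(env | wc -l)
echo $(( limit - env_bytes - env_ptrs * 4 - 2048 ))
```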
Another way to determine the currently available space is to call exec() with increasing length of arguments until it fails: envp[] is considered automatically, and the result is reliable. The configure check "Checking for maximum length of command line arguments..." (found in libtool/autoconf-generated scripts) works quite similarly. In a loop with increasing n, the check tries an exec() with an argument length of 2^n (but won't check for n higher than 16, that is 512kB). The maximum found is ARG_MAX/2 if ARG_MAX is a power of 2. Finally, the found value is divided by 2 (for safety), with the reason "C++ compilers can tack on massive amounts of additional arguments".
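That probing technique can be sketched in plain shell as well. This is a sketch, not the configure code: it grows a single argument by doubling, so on Linux 2.6.23 and later it runs into the per-argument limit (MAX_ARG_STRLEN) rather than ARG_MAX itself:

```shell
#!/bin/sh
# Double the length of one argument until exec() of /bin/true fails
# with "Argument list too long" (E2BIG); report the last length that worked.
arg=x
len=1
while /bin/true "$arg" 2>/dev/null; do
    ok=$len
    arg=$arg$arg
    len=$((len * 2))
done
echo "last successful single-argument length: $ok"
```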
If "command *" fails, then you can use one of the following alternatives:

for i in *; do command "$i"; done
(simple, completely robust and portable, may be very slow)

printf '%s\0' * | xargs -0 command
(works only if printf is a built-in, but then it can be much faster on high counts; thanks to Michael Klement)

find . -exec command {} \;
(simple, completely robust and portable, may be very slow)

find . -exec command {} +
(optimizes speed)

find . -print0 | xargs -0 command
(optimizes speed, if find doesn't implement "-exec +" but knows "-print0")

find . -print | xargs command
(if there's no white space in the arguments)

find . ! -name . -prune [...]
(add this to the find calls to avoid descending into subdirectories)

cd /directory/with/long/path; command *
(helps if a long path prefix pushed the command line over the limit)

command [a-e]*; command [f-m]*; ...
(splits the expansion into smaller patterns)
Apart from the total length, the number of arguments is limited, too, that is, the number of pointers in argv[]. Up to Linux 2.6.22, do_execve() in fs/exec.c tests if the number exceeds (PAGE_SIZE*MAX_ARG_PAGES-sizeof(void *)) / sizeof(void *). On a 32-bit Linux, this is ARG_MAX/4-1 (32767). This becomes relevant if the average length of the arguments is smaller than 4. Since Linux 2.6.23, this function tests if the number exceeds MAX_ARG_STRINGS in <linux/binfmts.h> (2^32-1 = 4294967295). And as an additional limit since 2.6.23, one argument must not be longer than MAX_ARG_STRLEN (131072). This might become relevant if you generate a long call like "sh -c 'generated with long arguments'". (pointed out by Xan Lopez and Ralf Wildenhues)
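The per-argument limit is easy to demonstrate (a sketch for Linux 2.6.23 and later; 140000 bytes exceed the usual MAX_ARG_STRLEN of 131072, while the total stays far below ARG_MAX):

```shell
# Build a single ~137kB argument; the shell expansion itself succeeds,
# but the exec() of an external binary fails with E2BIG.
long=$(head -c 140000 /dev/zero | tr '\0' 'x')
/bin/true "$long" 2>/dev/null || echo "a single argument may be too long, too"
```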
ARG_MAX (or NCARGS): the maximum length of arguments for a new process varies so much among unix flavours that I had a look at some systems:
System | value | getconf available | default value determined by |
---|---|---|---|
non-competitive: 1st edition (V1) | 255+? [1stEd] | experiments | |
non-competitive: V4, V5 and V6 | 512 | documentation of exec(2) in
V4,
V6
and (no manual) sys1.c in
V5
| |
Version 7,
3 BSD, System III, SVR1, Ultrix 3.1 | 5120 | NCARGS in <sys/param.h>
| |
4.0/4.1/4.2 BSD | 10240 | NCARGS in <sys/param.h>
| |
4.3 BSD / and -Tahoe | 20480 | NCARGS in <sys/syslimits.h>
| |
4.3BSD-Reno, 4.3BSD-Net2,
4.4 BSD (alpha/lite/encumbered), 386BSD*, NetBSD 0.9, BSD/OS 2.0 | 20480 | ARG_MAX in <sys/syslimits.h> (NCARGS in <sys/param.h>)
| |
POSIX/SUSv2,v3,v4 [posix] | 4096 (minimum) | + | minimum _POSIX_ARG_MAX in <limits.h> , ARG_MAX
|
AIX 3.x, 4.x, 5.1[aix5] | 24576 | + | ARG_MAX in <sys/limits.h> (NCARGS in <sys/param.h>)
|
AIX 6.1 | 1048576 | + | online documentation (ARG_MAX in <limits.h>)
|
BSD/OS 4.1,
NetBSD 1.0+x, OpenBSD x: | 262144 | + | ARG_MAX (/NCARGS ) in <sys/syslimits.h>
|
Cygwin 1.7.7 (win 5.1) [cygwin] | 30000 | ARG_MAX in <limits.h>
| |
Dynix 3.2 | 12288 | ARG_MAX in <(sys/)limits.h> (NCARGS in <sys/param.h>)
| |
EP/IX 2.2.1AA: | 20480 | ARG_MAX in <sys/limits.h>
| |
FreeBSD 2.0-5.5 | 65536 | + | ARG_MAX (/NCARGS ) in <sys/syslimits.h> [freebsd]
|
FreeBSD 6.0 (PowerPC 6.2, ARM 6.3) | 262144 | + | ARG_MAX (/NCARGS ) in <sys/syslimits.h> [freebsd]
|
GNU Hurd 0.3 Mach 1.3.99 | unlimited [hurd]
(stack size?) | + | |
HP-UX 8(.07), 9, 10 | 20478 | + | ARG_MAX in <limits.h>
|
HP-UX 11.00 | 2048000 [hpux] | + | ARG_MAX in <limits.h>
|
Interix 3.5 | 1048576 | + | - |
IRIX 4.0.5 | 10240 | NCARGS in <sys/param.h> (fallback: ARG_MAX in <limits.h>: 5120)
| |
IRIX 5.x, 6.x | 20480 [irix] | + | (fallback: ARG_MAX in <limits.h>: 5120)
|
Linux -2.6.22 | 131072 | + | ARG_MAX in <linux/limits.h> [linux-pre-2.6.23]
|
Linux 2.6.23 | (1/4th of stack size) | + | kernel code [linux-2.6.23] |
MacOS X 10.6.2 (xnu 1486.2.11) | 262144 | + | ARG_MAX (/NCARGS ) in <sys/syslimits.h>
|
MUNIX 3.2 | 10240 | ? | ARG_MAX in <sys/syslimits.h>
|
Minix 3.1.1 | 16384 | ARG_MAX in <limits.h>
| |
OSF1/V4, V5 | 38912 | + | ARG_MAX in <sys/syslimits.h>
|
SCO UNIX SysV R3.2 V4.0/4.2
SCO Open Desktop R2.0/3.0 | 5120 | ? | online documentation |
SCO OpenServer 5.0.x [osr5] | 1048576 | + | (fallback: ARG_MAX in <limits.h>: 5120)
|
UnixWare 7.1.4,
OpenUnix 8 | 32768 [uw/osr6] | + | (fallback ARG_MAX in <limits.h>: 10240)
|
SCO OpenServer 6.0.0 | 32768 [uw/osr6] | + | (fallback: ARG_MAX in <limits.h>: 10240)
|
SINIX V5.2 | 10240 | ? | ARG_MAX in <limits.h>
|
SunOS 3.x | 10240 | ? | ARG_MAX in <sys/param.h>
|
SunOS 4.1.4 | 1048576 | NCARGS in <sys/param.h> , sysconf(_SC_ARG_MAX)
| |
SunOS 5.x (32bit process) | 1048320 [sunos5] | + | ARG_MAX in <limits.h> (NCARGS in <sys/param.h>)
|
SunOS 5.7+ (64bit process) | 2096640 [sunos5] | + | ARG_MAX in <limits.h> (NCARGS in <sys/param.h>)
|
SVR4.0 v2.1 (386) | 5120 | ? (no ARG_MAX/NCARGS in <limits.h>/<sys/param.h>)
| |
Ultrix 4.3 (vax / mips) | 10240 / 20480 | NCARGS in <sys/param.h>
| |
Unicos 9,
Unicos/mk 2 | 49999 | + | ARG_MAX in <sys/param.h>
|
UnixWare 7: see OpenServer 6 | |||
UWIN 4.3 AT&T Unix Services for Windows | 32768 | + | ARG_MAX in <limits.h>
|
[posix] | See the online documentation (please register for access) for getconf and <limits.h>. |
[osr5] | Bela Lubkin points out: the limit on SCO OpenServer 5.0.x is set by the kernel parameter MAXEXECARGS and can be changed on the fly with scodb:

# scodb -w
scodb> maxexecargs=1000000
scodb> q

(0x1000000 = 16MiB.) This is the max size of a new temporary allocation during each exec(), so it's safe to change on the fly. Exceeding the limit generates a kernel warning:

WARNING: table_grow - exec data table page limit of 256 pages (MAXEXECARGS) exceeded by 1 pages
WARNING: Reached MAXEXECARGS limit while adding arguments for executable "ls"

Some `configure` scripts trigger this message as they deliberately probe the limit. |
[uw/osr6] | The limit on UnixWare can be increased by changing the kernel parameter ARG_MAX with /etc/conf/bin/idtune (probably in the range up to 1MB), regenerating the kernel with "/etc/conf/bin/idbuild -B" and rebooting. See also the online documentation. On UnixWare 7.1.4, the run time limit for a default install of "Business Edition" is 32768. Bela Lubkin points out that, very basically, OpenServer 6 can be described as a UnixWare 7.1.4 kernel with the OpenServer 5.0.7 userland running on top of it. |
[irix] | The limit on IRIX can be changed via the kernel parameter ncargs with systune (in the range defined in /var/sysgen/mtune/kernel, probably varying from 64KB to 256KB), regenerating the kernel with "autoconfig" and rebooting.
See also the online documentation of
systune(1M)
and intro(2).
|
[aix5] | The limit on AIX 5.1 can be changed at run time with
"chdev -l sys0 -a ncargs=value ",
in the range from 6*4KB to 1024*4KB.
See also the online documentation for chdev (AIX documentation, Commands reference). |
[freebsd] | Interesting, and anything but academic, was the reason for the first of two increases (40960, 65536) on FreeBSD: "Increase ARG_MAX so that `make clean' in src/lib/libc works again. (Adding YP pushed it over the limit.)" (quoted from http://www.FreeBSD.org/cgi/cvsweb.cgi/src/sys/sys/syslimits.h) |
[linux-pre-2.6.23] | On Linux, the maximum almost always has been PAGE_SIZE*MAX_ARG_PAGES
(4096*32) minus 4.
However, in Linux-0.0.1, ARG_MAX was not known yet,
E2BIG not used yet and exec() returned -1 instead.
With linux-0.10 it returned ENOMEM and
with Linux-0.99.8 it returned E2BIG .
ARG_MAX was introduced with linux-0.96, but it's not used in the kernel code itself. See do_execve() in fs/exec.c on
http://www.oldlinux.org/Linux.old/.
If you want to increase the limit, you might succeed by carefully increasing MAX_ARG_PAGES. |
[linux-2.6.23] | With Linux 2.6.23, ARG_MAX is not hardcoded anymore.
See the git entry.
It is limited to a 1/4th of the stack size (ulimit -s), which ensures that the program can still run at all. See also the git diff of fs/exec.c. |
[sunos5] | On SunOS 5.5, according to <limits.h>, ARG_MAX is 1M, decreased by the following amount: "((sizeof(struct arg_hunk *))*(0x10000/(sizeof)(struct arg_hunk))) space for other stuff on initial stack like aux vectors, saved registers, etc..". On SunOS 5.9 this reads "ARG_MAX is calculated as follows: NCARGS - space for other stuff on initial stack like aux vectors, saved registers, etc..", and <sys/param.h> defines NCARGS32/64 as 0x100000/0x200000, with NCARGS being substituted at compile time. ARG_MAX is not calculated in the header files but is set directly in <limits.h>, also substituted at compile time from _ARG_MAX32/64. SunOS 5.7 is the first release to support 64bit processes. |
[hpux] | HP-UX 11 can also run programs compiled on HP-UX 10.
Programs which have ARG_MAX compiled in as buffer length
and copy from argv[]/envp[] without boundary checking might
crash due to the increased ARG_MAX .
See devresource.hp.com |
[hurd] | NCARGS, in contrast, is arbitrarily set to INT_MAX (2147483647) in <i386-gnu/sys/param.h>. The reason: "ARG_MAX is unlimited, but we define NCARGS for BSD programs that want to compare against some fixed limit." I don't know yet if there are other limits, like the stack. |
[cygwin] | ARG_MAX 32000 was added to <limits.h> on 2006-11-07. It's a conservative value, chosen with the Windows limit of 32k in mind. However, the cygwin-internal limit, that is, if you don't call non-cygwin binaries, is much higher. |
[1stEd] | Judging from experiments in the simh emulator with the 1st edition kernel and 2nd edition shell, the results are somewhat undefined. If the length or number of arguments (there is no environment yet) is too high, data corruption may occur, including a kernel crash. The following may or may not indicate the nature of the limits: by calling a script which just echoes its arguments ("sh s arguments"), I found:
|
Thanks to Gunnar Ritter for the test results from UnixWare and OpenUnix.
Thanks to Rodolfo Martín for access to OpenServer 6.
comments to mascheck@in-ulm.de
<http://www.in-ulm.de/~mascheck/various/argmax/>