bash: /dev/stderr: Permission denied

Here is an explanation of the problem, followed by several ways to solve it.

After upgrading to a new release, my bash scripts started producing errors:

bash: /dev/stderr: Permission denied

In previous versions, Bash would internally recognize those file names (which is why this question is not a duplicate of this one) and do the right thing™. However, this has stopped working now. What can I do to run my scripts successfully again?

I have tried adding the user running the script to the group tty, but this makes no difference (even after logging out and back in).

I can reproduce this on the command line without problem:

$ echo test > /dev/stdout
bash: /dev/stdout: Permission denied
$ echo test > /dev/stderr
bash: /dev/stderr: Permission denied
$ ls -l /dev/stdout /dev/stderr
lrwxrwxrwx 1 root root 15 May 13 02:04 /dev/stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 May 13 02:04 /dev/stdout -> /proc/self/fd/1
$ ls -lL /dev/stdout /dev/stderr
crw--w---- 1 username tty 136, 1 May 13 05:01 /dev/stderr
crw--w---- 1 username tty 136, 1 May 13 05:01 /dev/stdout
$ echo $BASH_VERSION
4.2.24(1)-release

On an older system (Ubuntu 10.04), where the redirection still works:

$ echo $BASH_VERSION
4.1.5(1)-release

How to solve:

Below are several solutions to this problem. Method 1 is recommended first, because it is the most thoroughly tested and explained approach.

Method 1

I don’t think this is entirely a bash issue.

In a comment, you said that you saw this error after doing

sudo su username2

when logged in as username. It’s the su that’s triggering the problem.

/dev/stdout is a symlink to /proc/self/fd/1, which is a symlink to, for example, /dev/pts/1. /dev/pts/1, which is a pseudoterminal, is owned by, and writable by, username; that ownership was granted when username logged in. When you sudo su username2, the ownership of /dev/pts/1 doesn’t change, and username2 doesn’t have write permission.
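
A quick way to see this chain, and who owns the underlying pseudoterminal, is the following (the /dev/pts/1 above is only an example; readlink and ls will show whatever applies on your system):

# Follow the chain of symlinks behind /dev/stdout
readlink /dev/stdout              # -> /proc/self/fd/1
readlink -f /dev/stdout           # -> e.g. /dev/pts/1 on your system

# Show ownership and permissions of the underlying pseudoterminal;
# after "sudo su username2" the owner is still username
ls -l "$(readlink -f /dev/stdout)"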

I’d argue that this is a bug. /dev/stdout should be, in effect, an alias for the standard output stream, but here we see a situation where echo hello works but echo hello > /dev/stdout fails.

One workaround would be to make username2 a member of group tty, but that would give username2 permission to write to any tty, which is probably undesirable.
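
For completeness, that would look roughly like the following; as said, it grants more than you probably want, so treat it as a sketch rather than a recommendation:

# Add username2 to the tty group (takes effect at the next login)
sudo usermod -a -G tty username2

# Check the resulting group membership
groups username2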

Another workaround would be to login to the username2 account rather than using su, so that /dev/stdout points to a newly allocated pseudoterminal owned by username2. This might not be practical.

Another workaround would be to modify your scripts so they don’t refer to /dev/stdout and /dev/stderr; for example, replace this:

echo OUT > /dev/stdout
echo ERR > /dev/stderr

with this:

echo OUT
echo ERR 1>&2
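
If a script writes to standard error in many places, a small helper keeps this tidy; the function name err below is just my own convention, not something your scripts already define:

# Print all arguments to standard error without referring to /dev/stderr
err() {
    echo "$@" 1>&2
}

echo OUT      # standard output
err ERR       # standard error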

I see this on my own system, Ubuntu 12.04, with bash 4.2.24, even though the bash documentation (info bash) on my system says that /dev/stdout and /dev/stderr are treated specially when used in redirections. But even if bash doesn’t treat those names specially, they should still act as equivalents for the standard I/O streams. (POSIX doesn’t mention /dev/std{in,out,err}, so it may be difficult to argue that this is a bug.)

Looking at old versions of bash, the documentation implies that /dev/stdout et al are treated specially whether the files exist or not. The feature was introduced in bash 2.04, and the NEWS file for that version says:

The redirection code now handles several filenames specially:
/dev/fd/N, /dev/stdin, /dev/stdout, and /dev/stderr, whether or not
they are present in the file system.

But if you examine the source code (redir.c), you’ll see that that special handling is enabled only if the symbol HAVE_DEV_STDIN is defined (this is determined when bash is built from source).

As far as I can tell, no released version of bash has made the special handling of /dev/stdout et al unconditional — unless some distribution has patched it.

So another workaround (which I haven’t tried) would be to grab the bash sources, modify redir.c to make the special /dev/* handling unconditional, and use your rebuilt version rather than the one that came with your system. This is probably overkill, though.
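
If you did want to try it, the rough shape of such a rebuild would be something like this; the version number and prefix are only placeholders, and the actual edit to redir.c is left to you:

# Fetch and unpack the bash sources (version is only an example)
wget https://ftp.gnu.org/gnu/bash/bash-4.2.tar.gz
tar xzf bash-4.2.tar.gz
cd bash-4.2

# See where the special-case handling is guarded
grep -n HAVE_DEV_STDIN redir.c

# ... edit redir.c so the /dev/std* handling is unconditional ...

# Build and install under a private prefix, leaving the system bash alone
./configure --prefix="$HOME/bash-patched"
make && make install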

SUMMARY:

Your OS, like mine, is not handling the ownership and permissions of /dev/stdout and /dev/stderr correctly. bash supposedly treats these names specially in redirections, but in fact it does so only if the files don’t exist. That wouldn’t matter if /dev/stdout and /dev/stderr worked correctly. This problem only shows up when you su to another account or do something similar; if you simply login to an account, the permissions are correct.

Method 2

Actually, the reason for this is that udev specifically sets the permissions on tty devices to 0620, and su does not change either the ownership or the permissions (nor should it). In my view, this leaves us in a situation that makes /dev/std* non-portable.

The simple solution is to put “mesg y” in /etc/profile (or whichever top-level profile you prefer), as this changes the permissions of your tty device to 0622. I don’t really like that, but it is probably better than changing the udev rules.
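
A minimal sketch of that change, guarded so it only runs when the shell actually has a terminal (mesg complains otherwise):

# In /etc/profile (or whichever top-level profile you use):
if tty -s; then
    # Allow other users to write to this terminal; as noted above,
    # this changes the tty mode from 0620 to 0622
    mesg y
fi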

Method 3

This is an old question, but I think it is still very relevant, so here is a solution I found that is not mentioned here.

I came across this question because I also saw problems in an ‘su’-switched account when executing a command like this in bash:

echo test > /dev/stderr

Since all I am trying to do is redirect stdout to stderr, the following achieves the same thing, and it works even in an ‘su’-switched account:

echo test 1>&2
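
The same idea scales beyond a single line; redirecting a whole group avoids repeating 1>&2 (just a sketch of the pattern):

# Send a whole block of output to standard error in one place
{
    echo "first line"
    echo "second line"
} 1>&2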

Method 4

Because I wanted the same lines to go both to the console (or to a log file, if redirected), where the user will see them verbatim, and to a filter that would parse them, I used somecommand | tee /dev/stderr | myfilter.

This broke when a user attempted to run my script after sudo-ing…

The workaround is to check whether /dev/stderr is writable, which it will be if

  • there has been no sudo involved, or
  • standard error is being redirected to a file

So here is what I do:

if [ -w /dev/stderr ]
then
        STDERR=/dev/stderr
else
        # Fall back to the controlling terminal; when stderr has not been
        # redirected it points at that terminal anyway
        STDERR=/dev/tty
fi
...
somecommand | tee "$STDERR" | myfilter

Method 5

As a long-time Linux user, I recently set up a new 64-bit Ubuntu 12.04 LTS system and was mystified as to why my bash scripts weren’t working (permission denied); I subsequently found this thread. I thought the problem might be due to something in the new OS.

In the end it turns out that I’d stupidly used a UI tool to set permissions on my /home directory, and the problem was drive-related. To be sure, I created a temp directory on /opt and found my scripts would run just fine from there. Once I fixed the /home drive permissions everything was back to normal.
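
The exact repair depends on what the UI tool changed, but restoring ownership and sane modes on a home directory usually looks something like this (username is a placeholder; adjust as needed):

# Give the home directory and its contents back to their owner
sudo chown -R username:username /home/username

# Restore read/write for the owner; capital X adds execute only to
# directories and files that were already executable
chmod -R u+rwX /home/username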

One little mystery solved. sigh

Method 6

This solved this problem for me:

sudo chown -R $(whoami) /dev

It obviously needs to be run each time you su to another user.
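
A narrower variant of the same idea (my own sketch, not part of the original answer) is to take ownership of only the pseudoterminal you are actually using, rather than everything under /dev:

# From inside the su'd shell: claim only the current terminal
sudo chown "$(whoami)" "$(tty)"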

Note: Method 1 is recommended, as it is the most thoroughly tested and explained approach.
Thank you 🙂

All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
