There's discussion of it at http://www.linuxjournal.com/article/2156, including a workaround:
sleep 9999999 > pipe &
You could probably come up with something cleverer than that, but the general idea is that a reader on a named pipe sees EOF only once *all* writers have closed their end; the long-running sleep acts as a dummy writer that keeps the pipe open between commands. I'd imagine a simple C program that forks off, opens the pipe for writing, and then loops on while(1) { sleep(99999); } would be good enough too.
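A minimal sketch of the dummy-writer trick, for the record. This is hypothetical: `cat` stands in for the mysql client so the example is self-contained, and the paths are made up; in the real script you'd run something like `mysql ... < "$PIPE"` in place of the `cat`.

```shell
#!/bin/sh
# Hypothetical paths -- adjust for your script.
PIPE=/tmp/mypipe.$$
OUT=/tmp/myout.$$
mkfifo "$PIPE"

# Dummy writer: holds the write end of the pipe open so the reader
# never sees EOF between individual echo commands.
sleep 9999999 > "$PIPE" &
HOLDER=$!

# Long-lived reader (this is where mysql would go in the real script).
cat < "$PIPE" > "$OUT" &
READER=$!

# Each echo opens the pipe, writes, and closes it again -- but no EOF
# is delivered, because the dummy writer still has the pipe open.
echo "INSERT INTO t VALUES (1);" > "$PIPE"
echo "INSERT INTO t VALUES (2);" > "$PIPE"

# Done: kill the dummy writer; now the reader sees EOF and exits.
kill "$HOLDER" 2>/dev/null
wait "$READER" 2>/dev/null
RESULT=$(cat "$OUT")
echo "$RESULT"
rm -f "$PIPE" "$OUT"
```

Both commands reach the reader in one session, and the session ends cleanly when the dummy writer is killed.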
Sean
On Sat, Jul 31, 2010 at 6:18 PM, Adam Thompson <athompso@athompso.net> wrote:
Inside a shell script, I want to start a persistent mysql client process and feed it commands one by one.
I'd like to use a named pipe, but the only way I've found to do it so far is with a subshell: putting 99% of the script inside the subshell and piping the subshell's entire output to mysql. While this more-or-less works, it's very much not ideal.
If I use a named pipe, every time I echo something to that pipe (e.g. “echo INSERT INTO… > /tmp/mypipe”), mysql immediately exits upon reaching EOF, so only the very first command gets executed.
I recall posting a way to accomplish this with psql & inetd a year or two ago, but I don't want to use inetd – this **must** be entirely self-contained within a single shell script. Also, I can't figure out a way to tell mysql to ignore EOF on STDIN and immediately reopen it.
Anyone have any better ideas?
Thanks,
-Adam
Roundtable mailing list Roundtable@muug.mb.ca http://www.muug.mb.ca/mailman/listinfo/roundtable