[Wellington-pm] pass-through pipe?

Daniel Pittman daniel at rimspace.net
Sun Jul 18 00:20:27 PDT 2010


Richard Hector <richard at walnut.gen.nz> writes:
> On Sun, 2010-07-18 at 17:06 +1200, Lenz Gschwendtner wrote:
>> you mean something other than one process prints to STDOUT and the other
>> one reading from STDIN ... 
>> 
>> not sure i get your question right but if you need something other than the
>> normal stuff then you might be after a FIFO buffer? shared memory? what is
>> the use case? probably that helps :-)
>
> The use case is encrypting postgresql backups:
> system "pg_dump --cluster $cluster --dbname $dbname ... | gpg
> --options ... --encrypt";
>
> Currently I'm doing that in one 'system' line as described, but that
> fires up a shell, which shouldn't be necessary?
>
> I think if I was doing it in C, I'd call pipe(2), which would return 2
> file handles (one reader and one writer).

Er, just for reference: although I think IPC::Run is a better solution,
you could write this exactly like you would in C.  No, seriously, it
would pretty much just work.

See the perldoc pages for pipe, fork, exec, and open for the details.
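For instance, here is a minimal sketch of that pipe/fork/exec dance, with
echo and tr standing in for the pg_dump and gpg commands from the use case
so the sketch runs anywhere; no shell is involved, so there is no shell
quoting of $cluster or $dbname to worry about:

```perl
use strict;
use warnings;

# Two pipes: producer -> consumer, and consumer -> parent so we
# can collect the result.  In the real case the parent would more
# likely let gpg write straight to a file.
pipe(my $dump_r, my $dump_w) or die "pipe: $!";
pipe(my $out_r,  my $out_w)  or die "pipe: $!";

my $producer = fork() // die "fork: $!";
if ($producer == 0) {
    # Child 1: stdout becomes the write end of the first pipe,
    # then exec the producer (pg_dump in the real case).
    close $_ for $dump_r, $out_r, $out_w;
    open STDOUT, '>&', $dump_w or die "dup stdout: $!";
    exec 'echo', 'hello world' or die "exec echo: $!";
}

my $consumer = fork() // die "fork: $!";
if ($consumer == 0) {
    # Child 2: stdin becomes the read end of the first pipe,
    # stdout the write end of the second, then exec the consumer
    # (gpg in the real case).
    close $_ for $dump_w, $out_r;
    open STDIN,  '<&', $dump_r or die "dup stdin: $!";
    open STDOUT, '>&', $out_w  or die "dup stdout: $!";
    exec 'tr', 'a-z', 'A-Z' or die "exec tr: $!";
}

# Parent: close the unused ends, or the consumer never sees EOF.
close $_ for $dump_r, $dump_w, $out_w;
my $result = do { local $/; <$out_r> };
close $out_r;
waitpid $_, 0 for $producer, $consumer;
print $result;
```

Perl sets close-on-exec on the high-numbered pipe descriptors, but the
dups onto fds 0 and 1 survive the exec, which is exactly what you want.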

> I'd then fork once for pg_dump, and substitute the writer filehandle for
> stdout, then exec pg_dump.  I'd then fork again for gpg, and substitute the
> reader filehandle for stdin, then exec gpg.
>
> I haven't tried any of that, though - I'm guessing as to what might
> happen when a shell executes a pipeline like that. I could go read the
> bash source, I guess ...

No, that is it.  You got it right — this is how Unix does this stuff.
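For completeness, the IPC::Run version of the same pipeline is much
shorter: each command is an array ref, '|' connects them, and '>'
captures the final output.  Again echo and tr stand in for the real
pg_dump and gpg argument lists:

```perl
use strict;
use warnings;
use IPC::Run qw(run);

# Build the two-process pipeline directly; no shell is spawned,
# so the argument lists are passed through verbatim.
my $out;
run [ 'echo', 'hello world' ], '|', [ 'tr', 'a-z', 'A-Z' ], '>', \$out
    or die "pipeline failed: $?";
print $out;
```

In the real case you would replace the '>' capture with '>', 'backup.gpg'
or let gpg write its own output file.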

        Daniel
-- 
✣ Daniel Pittman            ✉ daniel at rimspace.net            ☎ +61 401 155 707
               ♽ made with 100 percent post-consumer electrons