I/O status Re: SPUG: proper checking of file status after read?

Ben Reser ben at reser.org
Fri Sep 19 11:24:46 CDT 2003


On Fri, Sep 19, 2003 at 03:49:21AM -0700, Fred Morris wrote:
> There are times I consider documentation to be the final arbiter, and
> times I consider behavior paramount.
> 
> I observe that perlvar seems to ignore status checking on read in its
> examples.

Yeah, because as I pointed out it's usually not an issue at all.

> I consider it a success because the failure to read data was not due to a
> failure of the storage management layer. Is there another language which
> considers end-of-file on initial read to be a storage layer failure?
> (Incidentally, I would say that Perl does not consider it an error
> condition either.) Attempting to read beyond EOF is considered an error in
> many languages; languages which do so on initial read typically provide an
> EOF test, used as until EOF() read(). Try for mainstream here; provide an
> example or be discarded. Even with SQL, reading 0 records returns an empty
> set.. not an error. man errno(3) lists ENOTTY as Inappropriate I/O control
> operation, but there is no error for end of file or empty file. BTW,
> attempting to read beyond EOF in Perl 5.8 reads 0 lines and does not set
> $!. That's really a side issue, I'm not going to go back and test in other
> versions.

I understand what you mean, but you're trying to separate the read from
what you're doing with it.  The code I provided is not written that way.
No data returned from the RHS of an assignment is an error.  If I weren't
trapping it then normally you wouldn't notice, because as with most
errors perl simply sets the left hand side to undef.
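
To illustrate (with a made up filename), here's a minimal sketch of the
pattern I mean.  A list assignment used in boolean context evaluates to
the number of elements assigned, so the "or die" fires on an empty file
even though no system-level read failure ever happened:

open(FOO, 'somefile') or die "Couldn't open somefile: $!";
# "or" binds more loosely than "=", so the assignment happens first and
# its element count is what gets tested; an empty file yields 0 elements
my @foo = <FOO> or die "No data read from somefile: $!";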

It looks like blaming it on the file system was incorrect.  I'll get to
why it's actually happening in a little bit.  I jumped to that conclusion
based on my cursory testing here and on past experience with file systems
behaving slightly differently in some situations.

> I can tell this might be the start of a religious difference of opinion, so
> I'm simply going to state mine and move on: reading 0 bytes from a file is
> not an error (at least initially) if the file is empty and there is no
> provision for testing for EOF, and the reason for that is so that reading
> bytes (or the lack thereof) is distinguishable from a storage management
> layer error... unless the storage management layer chooses to return an end
> of file condition as an error, of course.

I don't think it's religious.  I think you're looking at it from a
different perspective.  My thought is: why would you open a file to read
nothing?  Your thought seems to be that since the read didn't fail per
se, there's no error.  The difference is that I'm looking at it from
your application's standpoint and you're looking at it from the system
call's standpoint.  Frankly, I don't generally look at things from the
system call's standpoint unless:
a) I'm having an issue with making a system call work.
b) I'm writing a system call.

> Apparently they are in this case. Perhaps it's Linux? No, just tried it on
> a Solaris box with 5.5.3, and $! was false. Anybody else out there care to
> try and flesh out the failure matrix?

Actually it's coming from the fact that $! is only meaningful when there
has been an error.  There was an error, but the particular error I
trapped doesn't set errno.  As a result it's picking up an errno left
over from a previous call.  At any rate it looks like in 5.8.1 they
cleaned up this particular issue, because after the open $! is set back
to zero.

In 5.8.0 it's coming from this:
ioctl(3, TCGETS, 0x7ffff078)            = -1 ENOTTY (Inappropriate ioctl for device)

On 5.8.1 it does this:
ioctl(4, SNDCTL_TMR_TIMEBASE, 0xbffff420) = -1 ENOTTY (Inappropriate ioctl for device)

But $! is 0 when it dies, since the empty assignment itself doesn't set
errno.
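
If you want to poke at it yourself, a tiny probe along these lines (the
filename is just a stand-in for your test file) should show the
difference: on the 5.8.0 here it prints the leftover "Inappropriate
ioctl for device" from the ioctl above, while on 5.8.1 it prints 0
because open resets errno.

open(TEST, 'perlfiletest') or die "Couldn't open perlfiletest: $!";
# nothing has failed at this point, yet $! may still hold whatever errno
# perl's own internal calls left behind
printf "after a successful open: errno=%d (%s)\n", 0 + $!, "$!";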

Technically, it looks like they fixed open to set errno back to 0 when
it succeeds.  But the documentation is still correct, because you can't
assume that other system calls will do this.  Indeed we know many of
them do not.  Since those system calls are not part of perl and $! is
just an interface to errno, there's not much they can do about that.  At
least not in all cases.
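
Put differently, the portable discipline the docs are pointing at looks
roughly like this (just a sketch, with another made up filename): test
the return value of each call, and only look at $! on the failure
branch, because a successful call isn't required to clear errno.

if (open(TEST, 'somefile')) {
  # success: ignore $!, it may still hold a leftover value like ENOTTY
  my @lines = <TEST>;
  close(TEST);
} else {
  # failure: $! was just set by the failed open and is meaningful here
  die "Couldn't open somefile: $!";
}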

> OK, I understand: your argument is that Perl has been "fixed" so that it
> conforms to the documentation. That conforms to the matrix thus far, at any
> rate. (Or taking it at face value, this three line program is generating a
> valid Inappropriate ioctl error unrelated to reading an empty file? No,
> just tested that and in fact $! is being set to Inappropriate ioctl by the
> open, it's false prior to that; this is in spite of the fact that open
> returns successfully.) (So Perl has been "fixed" to conform to the
> documentation by having open set $! to an error number while returning
> successfully? Occam, sharpen your razor!)

I doubt it's been "fixed" that way.  I'd imagine that prior to 5.8.1
open simply wasn't resetting errno.  The difference on other platforms
(Solaris) may just be that perl isn't calling an ioctl that's
inappropriate on that platform.

> I need a reliable mechanism for determining I/O status; I am appealing for
> suggestions to this end. I am profoundly uninterested in continuing to
> bloviate on the Perlishness of Perl, except insofar as it may provide a
> means to refine this appeal.

If you really don't want to consider a read of a zero-length file an
error, which may or may not be an issue for your app, then you'll have
to do something like this:

open(TEST, 'perlfiletest') or die "Couldn't open perlfiletest: $!";
until (eof(TEST)) {
  my $buffer;
  # read() returns undef on a real error; 0 just means end of file
  defined(read(TEST, $buffer, 1024))
    or die "Read error on perlfiletest: $!";
  # my stuff with the file contents goes here
}

I went with @foo = <FOO> or die previously because it was simpler and
more perlish.  I really didn't think that reading an empty file would be
an issue for you.

> The section in perlvar on _Error Indicators_ states that $! corresponds to
> errors detected in the C library. perlvar says about $!: "You can assign a
> number to $! to set errno if, for instance, you want "$!" to return the
> string for error n, or you want to set the exit value for the die()
> operator." Should I set it to undef or 0 prior to the loop? It does
> eliminate the Inappropriate ioctl message when used on the test program.
> 
> Is this best practice? As good as it gets? Can anybody else confirm this?

I think it's a bad idea because it is bound to break under some
conditions.  If you really want to test the reads for errors you'll have
to do them yourself rather than doing it in a perlish way.
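
For example, something along these lines (again just a sketch, reusing
the test filename from above) checks every read explicitly with
sysread, whose return value cleanly separates a real error (undef, with
$! set) from end of file (0):

open(TEST, 'perlfiletest') or die "Couldn't open perlfiletest: $!";
my $data = '';
while (1) {
  my $chunk;
  my $n = sysread(TEST, $chunk, 8192);
  die "Read error on perlfiletest: $!" unless defined $n;  # undef => real error
  last if $n == 0;    # 0 => end of file, not an error
  $data .= $chunk;
}
close(TEST);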

> Have I been enlisted in yet another snark hunt because "it's not a problem
> with Perl" that $! is not being altered by the read, and that "yes it is a
> problem with Perl" that open is returning success (and successfully) yet
> leaving $! set to Inappropriate ioctl (which it does even when the file is
> *not* empty, for anybody who came in late or has a short attention span..
> and I retested this just now with the test program to be sure)? Is that
> cause for concern? (Occam? Occam?)

I don't think it's a problem with perl.  The manual already tells you
that $! isn't likely to be unset by a successful call.  But at any rate
it does not exhibit the same behavior on 5.8.1.  So it really doesn't
matter now.

On another matter: I don't know if it's just me, but your replies seem
to have a rather arrogant tone to them.  I don't think that's your
intention, but it's certainly the way it's coming across to me.  Coming
across that way usually isn't a good way to get help with something.

-- 
Ben Reser <ben at reser.org>
http://ben.reser.org

"Conscience is the inner voice which warns us somebody may be looking."
- H.L. Mencken


