[Pdx-pm] corruption puzzlement - followup

Kyle Hayes kyle at silverbeach.net
Fri Aug 8 12:54:54 CDT 2003

On Friday 08 August 2003 10:00, Michael Rasmussen wrote:
> Kyle,
> Thank you very much for the information on race conditions,
> SCP issues and directory entry creation latency.

I have some bitter experience in the area :-(

> I'd asked my question as a post mortem for the problem we
> observed on Tuesday.  The script was originally created so
> that users could do a Samba mount of the directory and pull
> the file when they received an email stating it was available.


> Now we've moved into a scenario where I move the files to
> other servers for the users and other processes to pick up.
> As a result I've killed the process that monitored the directory
> and moved the EOL conversion into the script that moves the files
> to the NT servers.
> code flow is now:
> opendir(DIR, $incoming_dir);
> @files = grep { /spec of interest/ } readdir(DIR);
> foreach $file (@files) {
> 	# non portable test for open file condition, you're warned
> 	@openfiles = `/usr/sbin/lsof +d $incoming_dir 2>/dev/null`;
> 	if ( ! grep( /\Q$file\E/, @openfiles ) ) {
> 		 proceed with munging and copying and backing up and all;
> 	} else {
> 		twiddle bits for a bit and try again
> 	}
> }
> Since the files are dropped into $incoming_dir by the business partner's
> transfer process and aren't touched by anything/anyone but my process,
> I'm confident (let me know if I'm wrong here) that if I catch it on the
> readdir and it's not open when I test for it, then it has arrived on
> site completely and I'm free to do my stuff.   The only race condition I
> see is that the file may actually close between the time I test for it
> being open and when I want to act on it, but that's OK since I'll return
> later.
> Lesson:  Don't throw together a quickie while users are in testing mode
> and leave it there.
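
For what it's worth, here's that flow as a self-contained script.  The 
directory, the filename pattern, and the lsof path are placeholders, so 
adjust to taste; a sketch only, and the same non-portability warning applies:

```perl
#!/usr/bin/perl
# Sketch of the flow above; $incoming_dir and the /\.dat\z/ pattern
# are placeholders standing in for the real values.
use strict;
use warnings;

my $incoming_dir = '.';    # placeholder for the real incoming directory

opendir my $dh, $incoming_dir or die "opendir $incoming_dir: $!";
my @files = grep { /\.dat\z/ } readdir $dh;    # "spec of interest"
closedir $dh;

foreach my $file (@files) {
    # Non-portable open-file test, as warned above: ask lsof which
    # files under $incoming_dir any process currently holds open.
    my @openfiles = `/usr/sbin/lsof +d $incoming_dir 2>/dev/null`;
    if ( !grep { /\Q$file\E/ } @openfiles ) {
        # proceed with munging and copying and backing up and all
    }
    else {
        # still open; twiddle bits for a bit and try again later
    }
}
```

Note the \Q...\E around $file; without it a filename containing a dot (or 
any regex metacharacter) can match lines it shouldn't.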

The only question I have is whether the file is listed as open (i.e. a file 
descriptor is allocated to some process for it) during the entire creation 
process.
I cannot see why this would not be the case, but I think it would be possible 
that scp could create the file, close it, reopen it and then add the 
contents.    Why someone would code it this way, I can't imagine, but then I 
can't imagine some of the stuff that is apparent in Windows either.

I suspect, as you note, that the only race condition left is not fatal.  If a 
file is open when you check, you'll just get it on the next round and no harm 
done.

Be warned that lsof is not always installed by default.  I know that in many 
versions of Red Hat (6.0 through 7.x I believe, not sure about >8), it was 
not.  I don't think SuSE installed it either, but I cannot remember. I use it 
often to track down processes that leak sockets and/or file descriptors.
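
If lsof isn't there, on Linux you can get a rough equivalent by walking 
/proc yourself.  A sketch, Linux-only, and it can miss files held by 
processes whose /proc entries you aren't allowed to read:

```perl
use strict;
use warnings;
use Cwd 'abs_path';

# Return true if any process (that we can inspect) holds $path open,
# by comparing the targets of the /proc/<pid>/fd/* symlinks.
sub file_is_open {
    my ($path) = @_;
    my $want = abs_path($path) or return 0;
    for my $fd (glob '/proc/[0-9]*/fd/*') {
        my $target = readlink $fd;
        next unless defined $target;
        return 1 if $target eq $want;
    }
    return 0;
}
```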

> On Thu, Aug 07, 2003 at 05:08:40PM -0700, Kyle Hayes wrote:
> > Change the the program so that you wait until the file is at least 60
> > seconds old (if the longest file takes 40 seconds, give yourself some
> > fudge factor).
> Then on a day of network congestion slowing things down or when the market
> grows crazy and the file size doubles I'm back where I started.  Hence the
> defer action as long as possible approach I eventually took.

Right.  This was my point about this not being a real solution.  It helps the 
problem, but doesn't cure it.  As long as the file is open without any 
interruption from the initial file creation through the final filling in of 
the data, your method effectively uses a lock (the existence of an open fd to 
the file).  This should be quite safe with the above caveat.
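
If you ever control both ends, flock makes that implicit lock explicit: the 
writer holds an exclusive lock while filling the file, and the reader only 
proceeds when a non-blocking shared lock succeeds.  scp won't cooperate 
like this, so this is a sketch of the general technique, not a fix for 
your setup:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# True if no writer currently holds an exclusive flock on $path.
sub ready_to_read {
    my ($path) = @_;
    open my $fh, '<', $path or return 0;
    if ( flock $fh, LOCK_SH | LOCK_NB ) {
        flock $fh, LOCK_UN;
        close $fh;
        return 1;    # safe to process
    }
    close $fh;
    return 0;        # writer still busy; come back later
}
```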

> > Also note that if you can stat the file, it is probably there, so the -f
> > may be redundant.
> That was polite. :) Yes, if I can stat it, it will pass the existence test.

I generally reverse the tests.  It is possible, I believe, to see that a file 
exists but have stat fail.  You'd need weird permissions to make this happen, 
but I think it is possible.  I know on NT it is possible to have a file 
visible in a directory, but you cannot open it or find its size (i.e. stat 
fails).

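In code, stat first, and fall back to the existence test only to explain a 
failure (the path here is a placeholder):

```perl
use strict;
use warnings;

my $path = '/var/spool/incoming/data.dat';    # placeholder

if ( my @st = stat $path ) {
    # stat succeeded, so we already have what we need; no separate -f.
    my ($size, $mtime) = @st[7, 9];
    print "present: $size bytes, mtime $mtime\n";
}
elsif ( -e $path ) {
    # the weird case: directory entry visible, but stat failed
    warn "visible but stat failed: $!\n";
}
```
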
> > finish and show up in step 3 above.  I've seen up to five second delays
> > on heavily loaded Linux systems running ext2 filesystems.  Ext3 and
> > Reiser running in journalling mode could actually have this problem worse
> > than ext2. The WinNT filesystem can get really weird this way.  On a
> > heavily loaded system, I timed a file taking more than 30 seconds to show
> > up in a directory after a copy operation said it was complete.
> Amazing.  I had no idea.  That's very important.

Yes, people don't often understand that non-atomic operations have a way of 
becoming very visible at the wrong times :-(  I've had to write some pretty 
twisted code just to handle log file rotation, processing and archiving due 
to things like this.  It gets really annoying when you realize that you've 
managed to copy all but the last three lines of a set of log files, for six 
months.

I wrote some Perl code to do replication for MySQL (before native replication 
was complete) and this kind of problem was really painful.  Obviously, if you 
replicate all but the last few lines of SQL, you didn't really make the slave 
the same as the master.  This took us weeks to track down.  Luckily we had no 
failovers before we fixed it!

If something is really important, I use a sentinel at the end of the file or I 
use file locks.  Sentinels are nice because you can just look for them 
without worrying if the file is closed or open still.
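
A minimal sentinel check, assuming the producer appends a known marker line 
(the marker string here is an arbitrary choice) as its very last write:

```perl
use strict;
use warnings;

my $SENTINEL = "### END OF FILE ###";

# True only when the file's last line is the sentinel, i.e. the
# producer finished writing it.
sub file_is_complete {
    my ($path) = @_;
    open my $fh, '<', $path or return 0;
    my $last;
    $last = $_ while <$fh>;
    close $fh;
    return defined $last && $last =~ /^\Q$SENTINEL\E\s*\z/;
}
```

Reading the whole file just to see the last line is wasteful for big files; 
seeking to somewhere near the end works too, but this shows the idea.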

> > > ################## start of 2dos ####################
> > > #!/usr/bin/perl -i
> > >  [snip]
> > > ############# end of 2dos #############################
> >
> > Erm, where does the output go?  Are these programs sanitized to protect
> > the innocent?
> You missed the -i on the #! line, an in-place edit with no backup.

D'oh!  Yep, I did.  I generally write fairly large Perl programs and use 
warning and tainting flags, so I didn't even think to look there.  Sigh.
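
For anyone following along, the snipped script presumably looked something 
like this; a reconstruction of the idea, not the actual code:

```perl
#!/usr/bin/perl -i
# With -i on the #! line, Perl edits each file named on the command
# line in place, with no backup: every print goes back into the file
# currently being read.
# Guard so nothing happens when run with no filenames.
if (@ARGV) {
    while (<>) {
        s/\r?\n\z/\r\n/;    # Unix -> DOS line endings
        print;
    }
}
```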

