[Pdx-pm] Newbie question about testing and Perl

Keith Lofstrom keithl at kl-ic.com
Thu Jul 13 18:42:07 PDT 2006


On Jul 13, 2006, at 7:38 AM, Keith Lofstrom wrote:
>... it is how we manage to reduce defects below 0.01% on items made  
>of impure real-world semi-deterministic analog goop.  Surely  
>software could do better.

On Thu, Jul 13, 2006 at 09:59:53AM -0700, Randall Hansen wrote:
> IMHO a large part of the reason is economic.  the tester-to-tested  
> unit cost ratio for near-perfect software is less than the seven  
> orders of magnitude in your example, but it's higher than most  
> external or internal clients are willing to pay.  furthermore, since  
> most software is unique, it's a cost that must be paid again and  
> again on every project.
> 
> the most robust software engineering (and i use the term advisedly)  
> process i've heard of is NASA's, for the space shuttle[1].  i do my  
> best to practice what i can call "software engineering," but i'm  
> under no illusions that i've ever developed anything non-trivial with  
> the robustness that NASA take for granted.
> 
> clients want instant gratification with software, and have been  
> conditioned to regard the cost of failure (e.g. BSOD) as (a)  
> unavoidable, and (b) trivial.  in fact it's neither of those things,  
> but nearly any client will pick "cheap and now, with a couple bugs"  
> over "three months from now and better, but still not perfect."

As was pointed out in a similar thread over on plug talk, the economics
usually support, and even demand, careful testing (most particularly at
the specification and prototyping stage).  Software failure is hugely
expensive, see:

http://www.spectrum.ieee.org/sep05/1685/failt1

(Thanks to Andrew Becherer for the pointer)

A recent local example is the poorly tested new web portal for First
Tech Credit Union.   After the high-stress rollout, Mike Osborne, the
49yo CFO of First Tech, died of strep pneumonia at St. Vincent Hospital.
Given his personality type, the events are almost certainly related.
Bad software can kill.  

Most clients don't want "instant gratification", but do have businesses
that rely on software being delivered when promised.  Usually "it is
ready now" is the only promise with a chance of being true.  Certainly
advanced hardware has its own delivery slips, but companies that don't
deliver are discounted in the marketplace, and often discounted into
extinction (Who remembers E Machines graphics boards for the Mac?  
Osborne Computers?).  Hardware is usually produced on spec;  if the
chip doesn't work, you don't get paid.  If the CPU has a bug with FDIV,
you spend a billion dollars replacing them.

On both the client and producer side of the software equation,
expectations are immature;  neither client nor producer understands
how good software can be.  Alan Cooper describes the phenomenon in "The
Inmates Are Running the Asylum" as "dancing bearware";  we are still
surprised that software works at all, so, like the dancing bear, we do
not expect the performance of a Baryshnikov.  But we *should*.  A
software production process designed to produce Baryshnikovs will be
more expensive per unit output than processes producing dancing bears.
The overall cost per delivered function may not rise much, because a
well tested process will drive shared modularity.  

Maybe most people won't pay anything extra at all for software that
works.  However, businesses that don't care about the accuracy or
productivity of their processes rapidly go bankrupt.  So it is
arguable that when somebody gets off their butt and actually develops
a provably more robust software development process, they will soon
dominate the market.  

While I am a newbie, one of Perl's attractions for me is a greater
focus on testing than I see in most other language communities.  And
that makes
it valuable to user communities where reliability does matter (for 
example, in banking and finance, arguably more sensitive to software
failure than NASA actually is).  Perhaps some metric of software
reliability versus testing can be developed that demonstrates this,
so that institutions like First Tech can convert the abstract goodness
of testing into dollars-and-cents business decisions about scheduling
and cost.

Shared modularity is another attraction of Perl;  the fact that most
software is unique is nothing to brag about.  There is very little
reason for the software that runs at First Tech Credit Union to be
markedly different from that running at Unitus or OnPoint, and it is
easier for the users if all families of banking software behave
alike.  Different organizational business logic will certainly 
change how the modules are configured and interconnected, but the
Banking::Funds::Transfer modules should be the same - and tested 
to the Nth degree.
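
As a sketch of what "tested to the Nth degree" might look like, here is
a tiny Test::More suite.  The transfer routine is invented for
illustration (a real Banking::Funds::Transfer module would be loaded
from its own package), but the testing style is standard Perl:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 4;

# Hypothetical stand-in for a shared Banking::Funds::Transfer routine.
# Accounts are plain hashrefs with a balance, for illustration only.
sub transfer {
    my ($from, $to, $amount) = @_;
    die "negative amount\n"     if $amount < 0;
    die "insufficient funds\n"  if $from->{balance} < $amount;
    $from->{balance} -= $amount;
    $to->{balance}   += $amount;
    return 1;
}

my $checking = { balance => 100 };
my $savings  = { balance =>   0 };

ok( transfer($checking, $savings, 40), 'transfer succeeds' );
is( $checking->{balance}, 60, 'source account debited' );
is( $savings->{balance},  40, 'destination account credited' );

eval { transfer($checking, $savings, 1000) };
like( $@, qr/insufficient funds/, 'overdraft is refused' );
```

The point is that the same suite runs unchanged whether the module
behind it belongs to First Tech, Unitus, or OnPoint.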

Since software for the Mars Orbiter cannot be tested with a real Mars,
and software for funds transfer cannot be tested with real funds, 
emulators ( Mock objects? ) are needed.  "Correct by construction"
ain't gonna happen (imagine Elmer Fudd saying "I will be weelly
weelly caeful"), so a complete test environment is necessary. 
Complete as in "all inputs and outputs are emulated or captured".
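
A minimal sketch of the mock-object idea, in core Perl only (the
ping_device routine and its PING/PONG protocol are invented for
illustration; CPAN's Test::MockObject automates the hand-rolled mock
below):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Code under test: it only needs an object with write() and read()
# methods, so a mock can stand in for a real serial port handle.
sub ping_device {
    my ($port) = @_;
    $port->write("PING\r\n");
    return $port->read() eq 'PONG' ? 1 : 0;
}

# A hand-rolled mock serial port: write() captures what was sent,
# read() returns a canned reply -- all inputs and outputs are
# emulated or captured, no hardware required.
package Mock::SerialPort;
sub new   { bless { sent => [] }, shift }
sub write { my $self = shift; push @{ $self->{sent} }, @_; 1 }
sub read  { 'PONG' }

package main;
my $port = Mock::SerialPort->new;
ok( ping_device($port), 'device answers PONG' );
is( $port->{sent}[0], "PING\r\n", 'correct command was written' );
```

Swapping the mock for the real port object is then a one-line change
in the caller, which is exactly the switch that has to be tested too.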

Hence my surprise on Wednesday night, when Eric did not turn off
the real serial port and turn on the emulated one, and the question
that started this thread.  I will continue with mock objects in
another message.

Keith

-- 
Keith Lofstrom          keithl at keithl.com         Voice (503)-520-1993
KLIC --- Keith Lofstrom Integrated Circuits --- "Your Ideas in Silicon"
Design Contracting in Bipolar and CMOS - Analog, Digital, and Scan ICs

