[Edinburgh-pm] Embarrassingly n00bish question

Aaron Crane perl at aaroncrane.co.uk
Sun Jul 10 11:16:51 PDT 2011


Miles Gould <miles at assyrian.org.uk> wrote:
> This situation is no doubt terrible in even more ways than I've worked
> out for myself. Surely there must be a Better Way. So, how do you all
> handle this situation?

I do the following:

1. My ~/usr is under revision control.[1]

2. ~/usr/bin contains private scripts — things that aren't worth
open-sourcing, for one reason or another.

3. ~/usr/bin/usrdeploy is a program that symlinks dotfiles from
~/usr/etc into ~.[2]

4. My ~/.bash_profile (installed from ~/usr/etc/bash/profile) adds
~/usr/bin (and other directories as needed) to $PATH.

5. Some of the scripts in ~/usr/bin use libraries from ~/usr/lib.
Those that do begin with the incantation `use FindBin; use lib
"$FindBin::Bin/../lib";`, so they automatically find the nearby
libraries.[3]

6. ~/usr/t contains some tests, which I can run with `cd ~/usr &&
prove -l t`.[4]
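The symlinking step in item (3) could be sketched as a small shell
function; this is a hypothetical minimal version, not the real
usrdeploy, which (per footnote [2]) also templates some files for OS
differences and copies others so they can be mode 600:

```shell
#!/bin/sh
# Hypothetical sketch of usrdeploy's symlinking step: link each file in
# a source directory into a destination directory as a dotfile.
deploy() {
    src=$1; dest=$2
    for f in "$src"/*; do
        [ -e "$f" ] || continue
        name=".$(basename "$f")"
        # Only replace existing symlinks; never clobber a real file.
        if [ -e "$dest/$name" ] && [ ! -L "$dest/$name" ]; then
            echo "deploy: skipping $name (exists, not a symlink)" >&2
            continue
        fi
        ln -sfn "$f" "$dest/$name"
    done
}
```

Running `deploy "$HOME/usr/etc" "$HOME"` would then link, say, a
hypothetical ~/usr/etc/screenrc to ~/.screenrc.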

Note that nothing is added to $PERL5LIB or similar, or indeed to
anything other than $PATH; this makes such variables available for use
by any insane piece of software that wants to blindly overwrite them
for its own nefarious purposes.

Item (5) in my list means that it's important for ~/usr/bin to be in
$PATH directly, rather than having its contents symlinked into (say)
~/bin.  If that weren't the case, FindBin would pick the ../lib
relative to the symlinked directory, which presumably doesn't contain
the necessary libraries.  That could be fixed by symlinking ~/usr/lib
into ~/bin, but I'm not convinced that doing so would be a useful
change.
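The pitfall is easy to demonstrate with a shell mock-up of the layout
(all paths here are invented for the demonstration): a script that
derives its own directory from $0, the way FindBin does by default,
reports the symlink's directory rather than the real one.

```shell
#!/bin/sh
# Mock-up (invented paths) of the symlink pitfall: a script that takes
# its directory from $0, FindBin-style, sees the symlink's location.
mkdir -p demo/usr/bin demo/usr/lib demo/bin
cat > demo/usr/bin/tool <<'EOF'
#!/bin/sh
# Report the directory this script appears to live in.
echo "my dir: $(cd "$(dirname "$0")" && pwd)"
EOF
chmod +x demo/usr/bin/tool
ln -s ../usr/bin/tool demo/bin/tool

demo/usr/bin/tool   # reports .../demo/usr/bin: ../lib is usr/lib, good
demo/bin/tool       # reports .../demo/bin: ../lib would be demo/lib
```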

In general, my approach to ~/usr implies that anything I want to use
in my personal environment must be either run directly from ~/usr, or
installed system-wide; further, my preferences mean that "installed
system-wide" typically means "installed as a Debian package".  That
was a reasonable decision when I made it in 2003, but it does have a
deficiency: in practice, it's too hard to extend my environment with
random wee things (from GitHub, CPAN, etc) that aren't big or
well-known enough to have Debian packages.  (This would perhaps also
be a problem for your various things in ~/src/monkey, if you wanted to
adopt my approach.)  My vague plan is to extend usrdeploy to handle
such things, but I haven't got there yet.

> What awesome tools should I be using to manage
> my dists? What should I read to better understand how Perl dists are
> meant to work?

I'm not quite sure what you're asking there.  I take it you don't mean
things like Dist::Zilla?

Also, I'm eagerly following Miyagawa's work on Carton, which looks
like it may help enormously with the "aggregating third-party
software" part of the problem:

https://github.com/miyagawa/carton

[1] That's been true for nearly eight years now, and doing so was
one of the best decisions I've ever made.  Every motivation for using
revision control for software development also applies to your
dotfiles.  And in addition, note that every change you make to your
dotfiles is in practice tried out on the machine where you first
discovered a need for it, before being rolled out to everywhere else
you log in.  That is, dotfile development is almost always
"concurrent", in the sense used by CVS.  So pseudo-revision-control —
tarball archiving or similar — just doesn't work well.  Not to mention
that, given only a Git client, a minute or so is long enough to bring
up my personal environment on a new account: `git clone
me@myserver.example.com:git/usr.git && usr/bin/usrdeploy`, followed by
logging out and back in again.

[2] It's a little more complicated than that; some dotfiles are
effectively templated to take OS differences into account, and some
have to be copied so that they can be mode 600 as required by the
program that uses them, and so on.  But that's the basic idea.

[3] This may not be entirely reliable in some weird situations,
because FindBin is designed to do almost exactly the wrong thing: it
prays that the full pathname to the script is in argv[0] (false in
some circumstances and on some OSes), or if not, that the script was
found in one of the directories in $PATH (which is clearly
ridiculous).  This is particularly bizarre given that Perl knows full
well the name of any file it's reading, and provides it in both
`__FILE__` and `(caller $n)[1]`.  The correct approach is more like

use Path::Class qw<file>;
use lib file(__FILE__)->dir->subdir('lib')->stringify;

but that relies on having Path::Class installed before I can run any
script that uses one of my own libraries (and I do use this setup on
the very occasional servers where I don't have root access, so this
matters to me).  Since Perl ships with FindBin, and it mostly works
fine in most situations I care about, this is a reasonable trade-off
for me.  If Perl had __DIR__ to go along with __FILE__, or it shipped
with Dir::Self, or I cared less about being able to bootstrap some of
my tools without installing any extra software, the trade-off would
change.

[4] Using the `-l` option means my test scripts don't need the FindBin
incantation.

-- 
Aaron Crane ** http://aaroncrane.co.uk/

