Mailing List Archive



Re: [tlug] TLUG Site with Hakyll Update



Curt Sampson writes:
 > On 2019-03-18 01:57 +0900 (Mon), Stephen J. Turnbull wrote:

 > > [E]xperience with Python setup.py scripts suggests diminishing
 > > returns set in quickly once you start breaking out pieces of
 > > infrastructure into separate modules.  It becomes like learning a
 > > new language for everyone else (and relearning for you if you
 > > have to come back to it after a few months).
 > 
 > I'd love to hear a talk about this, formal or just over a beer.

Probably over a beer.  setup.py is like debhelper or rpm scripts.
They're all whole DSLs, right?  Not much more to say than that.  If
you do it a lot, you become familiar, but if you don't, it's all APL
to you (although written in Python or Haskell).  Well, a little more:
of course if it's a DSL private to you (e.g., in my case it's often a
Make include), that makes it really bad for other people trying to
figure out what's going on (especially if it's going wrong!).
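To make the point concrete: even a minimal setup.py is its own little
declarative language, opaque until you've internalized the vocabulary
(a generic sketch; the project name, module, and dependency below are
placeholders, not anything from this thread):

```python
# A minimal setup.py -- declarative setuptools "DSL" in Python syntax.
# Every keyword argument has meaning only once you know the DSL.
from setuptools import setup

setup(
    name="example-project",          # hypothetical distribution name
    version="0.1.0",
    py_modules=["example"],          # hypothetical single-module project
    install_requires=["requests"],   # hypothetical runtime dependency
)
```

The syntax is plain Python, but knowing Python tells you nothing about
what `py_modules` versus `packages` means, which is exactly the
familiarity problem above.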

On to CI:

 > But at that point the idea of DevOps was still not widespread, AFAIK
 > (and certainly the name didn't exist yet).

True.

 > From [Beck XP Explained] Chapter 7, "Primary Practices":
 > 
 > ] _Continuous Integration_

Beck (or someone of that generation) may be the source of the name,
but the description looks like an updated version of Fred Brooks's
discussion of continuous integration, which he recommends against
(subject to your caveat that it's not *real* CI) for the simple reason
that a system build took about a week in OS/360. ;-)

 > ] The most common style of continuous integration is asynchronous.

Beck's description of aspects of continuous integration is new, not in
Brooks.  (Again, because for Brooks truly continuous integration was
simply infeasible, even in the mid-70s when "Mythical Man-Month" was
being composed. :-)  But I will say it sounds a lot like Brooks-style
engineering philosophy, but informed by different experiences and
capabilities.  Specifically, by 1989 the smart people knew a *lot*
more about building distributed systems, and commodity hardware could
effectively run those systems without exploding the network.  People
were a lot more disciplined about APIs, at least if they wanted to
work successfully in large development organizations (and the smart
ones carried that discipline over to small dev orgs! :-).

 > Now back to Steve:
 > 
 > > I'm pretty sure Microsoft "nightly builds" go back to the same era
 > > (at least the early 90s).
 > 
 > So it's debatable whether those nightly builds should really be
 > considered "continuous integration"; you can see that above Beck
 > pretty clearly thought that it wasn't: "Asynchronous integrations are
 > a big improvement on daily builds."

Conceded.  I would say that Brooks probably did not consider the
possibility that week-long builds were an "accidental" difficulty.
(Of course, in his experience there was no "commodity hardware" -- the
hardware was being developed at the same time as the software, which
mostly ran on simulated hardware that differed from both the spec and
the current dev version of the hardware, don't forget the lions and
tigers and bears, oh my![1])

 > [I]n a situation like that I'd be finding further ways to integrate
 > more often, such as building smaller parts (but larger than what
 > just one pair of developers is working on) at least every hour or
 > two, and linking those smaller parts to yesterday's build of all
 > the rest in order to run tests on them.

This is exactly the kind of practice that Brooks described and
recommended against, and Microsoft (much later) engaged in, though.
The nightly builds were something of a check on this.  But in Brooks's
experience, on a large system the simultaneous divergences would bring
everything to a grinding halt far too often.  Of course, integration
itself was problematic, since they didn't have capable source code
management systems (basically repository = FTP site as I understand
Brooks's discussion).  I'm not sure how Microsoft dealt with it (note:
Microsoft *did* have real VCS), but I suspect that it was simply a
matter of proceeding in this way until divergences got too painful,
then there would be a meeting....

 > I'd imagine that a proper integration test would involve booting up a
 > Lisp machine from scratch with the current code. That clearly would be
 > an overnight thing. :-)

That's entirely a joke about general Lisp machine performance, right?
If not, it's a misrepresentation of the "no builds" concept behind the
Lisp machines (load an image of the Lisp data, part of which is a Lisp
program, into memory and call the program).
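A loose Python analogy for that "no builds" idea -- serialize the live
state, then "boot" by loading it back -- with the caveat that a real
Lisp machine image also contained the compiled code itself, which
pickle does not capture:

```python
import pickle

# Loose analogy only: the "image" here is just serialized program state.
# A Lisp machine image held live data *and* code; there was no separate
# build step, you saved the world and later resumed it.
state = {"counter": 41, "greeting": "hello"}

image = pickle.dumps(state)      # "save the world" to an image
restored = pickle.loads(image)   # "boot" by loading the image back
restored["counter"] += 1
print(restored["counter"])
```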

 > > On the other hand, if you mean more than just the syntax checks that
 > > linkers do, really I would put it much later with the "devops"
 > > movement, when people started putting nightly builds into production
 > > and claiming they passed QA. ;-)
 > 
 > There's pretty much no question that they passed QA, though the wise
 > consumer might want to ask what that QA involved. :-)
 > 
 > But even with no QA beyond "I turned it on and it didn't explode,"
 > this is a surprisingly large step for many organizations, even now. I
 > am still seeing teams regularly ignore deployment until the last minute

No better than grad students doing research and expecting to write a
dissertation in a week, eh?  But devops goes farther than continuous
integration: it's continuous deployment.

 > and then discover it's going to take them two weeks to set up their
 > "completed" software to be usable by the client.

Sure, and this was sort of the Lisp machine concept.  None of the
hardware manufacturers gave a fig about being compatible with each
other, they wanted their platform to be the only one to run a given
system anyway.  So once the system was running on the test box, you're
done -- just copy the image.  That has its flaws, of course (just look
inside any Emacsen, ugh).

I certainly agree that we want our integration tests to go all the way
to deployment, though.  AIUI, the devops admin specialty is all about
developing the personnel to support that discipline in the in-house
"continuous deployment" scenario, which is somewhat different from the
more general "continuous integration" of a shrinkwrapped[2] product
scenario.
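A toy sketch of that "deployment is just the last stage" idea: deploy
runs on every integration, gated on build and test, so it can't rot
until release day.  (The stage names and the `true` commands below are
placeholders for real build/test/deploy commands.)

```python
import subprocess

def run_stage(name, cmd):
    """Run one pipeline stage; raise (aborting the pipeline) on failure."""
    subprocess.run(cmd, check=True)
    return name

def pipeline(stages):
    # Deploy only executes if build and test succeeded, but it is
    # exercised on every integration, not deferred to the last minute.
    return [run_stage(name, cmd) for name, cmd in stages]

# "true" always succeeds; substitute real commands in practice.
completed = pipeline([
    ("build", ["true"]),
    ("test", ["true"]),
    ("deploy", ["true"]),
])
print(completed)
```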

 > That said, though it's not explicitly stated in the Beck quote above,
 > the idea that everything should have automated tests is so deeply
 > embedded in XP that it's reasonable to assume running a real test
 > suite (far beyond "just the syntax checks that linkers do") would be
 > a given.

I thought the motto of XP was "move fast and break things"?  Or am I
confusing that with "EOL is 2001 ... no, 2004 ... no, 2006 ... no,
2010 ... no, 2014 ... no, why are there XP SP2 systems running the
Tokyo Olympics??!"  (Just pulling your leg, of course.)

More seriously (and a rather different topic), how many large
organizations really implement XP or any of these other famous
disciplines?  And does it matter which one you use, given that you've
got a team that's disciplined enough to stick with any of them for
more than a quarter?

Footnotes: 
[1]  Kidder's _The Soul of a New Machine_ is an excellent, scary
description of this environment in a later incarnation (Data General).

[2]  Take that seriously -- a product that is a defined entity -- but
not literally (no Saran Wrap needed), please.


