Re: [tlug] rewriting a few million lines of Fortran code (was Re: Fortran --> Python (was linux engineer))



> >>   So... Who is going to re-write a few million lines of 
> >>   Fortran code into Python? (And why?!)
> 
> In this context, the reason for rewriting in a higher level language 
> would be to make it easier to verify and to change.  It also makes it 
> easier to separate the low level optimization from the high level 
> mathematical and algorithmic choices.

     I'm not sure I gave a clear enough explanation of the nature
of this kind of software [in astronomy/astrophysics]. 

     Typically, this kind of numerical modelling is 'in house'.
It is decidedly not software for general consumption. This is partly
because the algorithms have been developed for a highly specialized
problem. This means (1) it has very limited applicability -- it has
been 'optimized' for this one particular case and cannot easily be
adapted to other cases; (2) very few people understand the detailed
physics underlying the code. In my case, for example, there was never
a question of my looking over the code, partly because of the nature of the 
collaboration ("we've got the data, you've got the models"), and partly 
because it would have been pointless: I was simply not versed well 
enough in the esoterica of solving the equation of radiative transfer 
under the conditions of non-local thermodynamic equilibrium and blah blah 
blah... (Oh sure, as part of my graduate education I went through the sheer, 
abject hell of "Radiative Transfer", so I had a general idea of what 
was going on. But trying to understand the detailed method of solution 
under those circumstances would have been something like trying to read 
Mishima in Japanese after I'd mastered katakana and hiragana.) 
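
     (For the curious, and strictly as background: the equation in
question, in its textbook plane-parallel form, is

    \mu \frac{dI_\nu(\tau_\nu, \mu)}{d\tau_\nu}
        = I_\nu(\tau_\nu, \mu) - S_\nu(\tau_\nu)

where I_\nu is the specific intensity, \tau_\nu the optical depth,
\mu the direction cosine, and S_\nu the source function. The non-LTE
complication is that S_\nu depends on the atomic level populations,
which in turn depend on the radiation field I_\nu itself, so the whole
system has to be iterated to self-consistency. The form above is
standard; the esoterica live inside S_\nu.)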

     "So how  do you know if what they did is right?" In other words,
how do you 'verify' it? That part is not so hard. At least, it was very 
simple to verify that the model *isn't* robust: We knew, via our data, the 
answer they had to get. If their grid of models could not be made to
satisfactorily match our data, it's back to the drawing board for them.
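
     To make that concrete, here is a minimal sketch in Python of what
that kind of 'verification' amounts to: scan the grid of model
predictions against the observed data and see whether any of them match
within the measurement errors. Every name and number here is made up
for illustration; the real comparison was, of course, far more involved
(and in Fortran).

import numpy as np

def best_fit(observed, errors, model_grid):
    # Reduced chi-squared of each model in the grid against the data.
    chi2 = [np.sum(((observed - m) / errors) ** 2) / observed.size
            for m in model_grid]
    i = int(np.argmin(chi2))
    return i, chi2[i]

obs = np.array([1.0, 0.8, 0.6, 0.5])     # toy 'observed' fluxes
err = np.full_like(obs, 0.05)            # toy measurement errors
grid = np.array([[1.1, 0.9, 0.7, 0.6],   # three candidate models
                 [1.0, 0.8, 0.6, 0.5],
                 [0.5, 0.5, 0.5, 0.5]])

idx, chi2 = best_fit(obs, err, grid)
if chi2 > 2.0:    # arbitrary acceptance threshold
    print("no model in the grid fits -- back to the drawing board")
else:
    print("model %d matches (reduced chi^2 = %.2f)" % (idx, chi2))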

     As for APIs and documentation, that sort of information was
written up in a few papers in highly technical journals. (Once you get 
past the introduction, more sheer hell.) Of course, there is a community 
that specializes in this sort of thing, so they would be quite 
comfortable with the code. But, as is the nature of research, the 
different groups in this community have their own ideas on how to solve 
the as-yet-unsolved problem. So there isn't a question here of someone
designing 'the' software that everyone uses, debugs, improves, etc. 
General methods of solution or improvements are published, and may
become 'industry standards' of a sort. But the actual code is designed 
and used 'in house', based on their own ideas. 

     In this context, it doesn't make sense to spend the time (and
the money -- we're not talking about corporations here!) to recode 
things in a modern language.
