
Re: [tlug] Video Editing Soft & Formats



On Fri, 20 Jul 2007, Dave M G wrote:

> I will have to agree to disagree with anyone who feels, as Josh and
> others have expressed, that the command line can suffice for graphics
> related tasks. Even if it's simple conversions. And I'm going to just
> leave it at that.

Sometimes the command line not only has to suffice, it's the only option. How do you imagine YouTube could work if every uploaded video had to have its conversion done by a human in front of a computer?
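To make the point concrete, here is a minimal sketch of that kind of unattended conversion pipeline. The ffmpeg flags are real, but the directory layout and function names are my own invention, not anything YouTube actually uses:

```python
import subprocess
from pathlib import Path

def ffmpeg_command(src: Path, dst: Path) -> list[str]:
    """Build an ffmpeg command line to transcode one upload to H.264/MP4."""
    return ["ffmpeg", "-y", "-i", str(src),
            "-c:v", "libx264", "-c:a", "aac", str(dst)]

def convert_queue(upload_dir: Path, out_dir: Path, run=subprocess.run):
    """Walk a directory of uploads and transcode each one.

    No human sits in front of this; it just chews through whatever
    arrived since the last run.
    """
    for src in sorted(upload_dir.glob("*.avi")):
        dst = out_dir / (src.stem + ".mp4")
        run(ffmpeg_command(src, dst), check=True)
```

Cron this, and a million uploads a day get converted with zero clicks.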

And having a command line does not prevent DIY types from doing stuff
visually; it enables them to do it in the way we need. Let me give you
an example of a "visual" problem where having command-line capabilities
made the solution much easier.

My (and Bryan Buecking's) company, Starling Software, has a product
called RSWF which is used for taking apart, analyzing, modifying, and
rebuilding SWF (often called "Flash") files. (It generates content, too,
but once a chunk of content is generated, you can think of it as just
another piece of content, no different from one read in from another SWF
file.)

Now one of the problems we have to deal with is whether or not the new
file we build looks (and works, for interactive ones) as it's supposed
to look and work. This is a decision that only a human can make, at
least the first time. We use stored copies of the correct output (or
analyses of the correct output) for repeated testing, but when we change
the system, the output or analysis might change. It might look exactly
the same, even though the file itself is different, or we might have
made a change where we expect the result to look or act differently.

The testing tools we've built are all command-line based, but allow
us, on request, to use command-line tools to pop up visual displays of
the old and new versions of the files when we determine that we need
a human check. (Basically, this is when we expect that the output file
or analysis thereof will be different from the previously expected
output. Only the programmer working on a particular feature or bug knows
whether the output might be changed.) If the programmer requests a
visual comparison, he gets it. If he doesn't, the automated system takes
care of reporting "as expected" or "not as expected."

For other projects, we do similar things with mplayer.

We might have been able to automate this if we were using purely GUI
applications, but it would have been much harder.

A great thing about the Unix philosophy of working is that, since it
takes more of a toolbox approach, you can integrate things unanticipated
by the designers of a program into your toolkit.

I can even envisage situations in more conventional "visual"
applications, such as generating edit decision lists and color timing,
where this sort of capability would be useful. What about a story such
as this?

During the editing process, a color timer is given all of the source
material and an EDL for the initial rough cut. He writes a script to
go through all of the scenes in the EDL and look for the ones where a
spectral analysis of the scene indicates an obviously odd balance, and
fixes those first, adding this information to a, hmm, call it a "timing
list." He then sends that to the producer, who's already been watching
the rough cut, as well as to the editor and director, so they see the
worst of the material fixed (probably in a rough way) right away. He
then starts working through the EDL scene by scene, fixing each one. A
couple of days later, he gets a new EDL, and diffs it with the old one
to see what the new scenes are. He can use his analysis tool again to
check those first, do rough fixes, send the fixes back, and carry
on. He gets a complaint from the editor who can't make a decision about
which take of a certain scene to use because one is corrected and the
others are not. So he goes back and finds all of the source material
from that location on that day and notices that for several different
scenes he had to use a very similar kind of correction; he extracts out
the common component he realises was necessary for the light in that
location on that day, applies it to all of those scenes, does a quick
check that the result is adequate for the purpose, and sends back a base
correction to be applied to all scenes in that location and day, as well
as the list of extra corrections to be applied to specific scenes.
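The "diff the new EDL against the old one" step in that story is the kind of thing a few lines of script handle. A hedged sketch, treating an EDL as one scene entry per line of plain text (a big simplification of a real CMX-style EDL, but the principle is the same):

```python
def new_scenes(old_edl: str, new_edl: str) -> list[str]:
    """Return the scene entries present in the new EDL but not the old.

    Each EDL here is a plain-text file body, one scene entry per line.
    Order of the new EDL is preserved, so the timer can work through
    the additions in cut order.
    """
    old = set(old_edl.splitlines())
    return [line for line in new_edl.splitlines()
            if line and line not in old]
```

With this, "what changed since the last cut?" is a one-liner instead of an evening of eyeballing.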

Being able to use basic tools such as sort and diff, and writing (or
getting someone to write) custom tools and "little languages" to help
manipulate the data in question, can save enormous amounts of time and
money in processes like this. And it's not at all as hard to learn as
many people think it is. Keep in mind that secretaries at Bell Labs
in the 1970s were happily using ed and nroff to type up letters and
documents.

cjs
--
Curt Sampson       <cjs@example.com>        +81 90 7737 2974
Mobile sites and software consulting: http://www.starling-software.com

