Glynn wrote:
> Ultimately, the only sane approach to generating animation is to
> write out a sequence of image files then use a dedicated encoding
> program such as ffmpeg. Video encoding is a complex task with many
> parameters. Any built-in encoding functionality is inevitably going
> to do a half-baked job.
NVIZ animations will almost always be of a similar ilk, so it is
reasonable to assume we can derive and suggest some good defaults to use
for the encoding, and include an optional button to automatically encode
using those defaults (e.g. favour quality over encoding time, pick a
sensible FPS, test XviD "cartoon mode" hinting, etc.). I don't mind if
the actual encoding is done by a wrapper script calling mencoder (or
whatever), or if it is simple enough to use the FFmpeg C API to drive
it. I do think we should offer some sort of "encode on-the-fly" button
though.
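To make "good defaults" concrete, such a wrapper could boil down to a
single mencoder call; a rough, untested sketch (the fps and bitrate
values here are placeholders, not tuned recommendations):

  mencoder "mf://*.png" -mf type=png:fps=25 \
      -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=2000 -o anim.avi

  # or, to test XviD's cartoon-mode hinting:
  mencoder "mf://*.png" -mf type=png:fps=25 \
      -ovc xvid -xvidencopts bitrate=2000:cartoon -o anim.avi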
Maris Nartiss wrote:
> IMHO we should NOT add yet another dependency for GRASS, especially
> one that duplicates already working utilities with a not-as-good
> in-house wonder. We do not have enough manpower to implement
> YetAnother video encoder.
I agree with both points individually [limit new deps when reasonable +
don't reinvent the wheel], but together they conflict -- we have an
optional ffmpeg dependency which is used to do the encoding. We're
simply connecting to their encoding API, not writing a new encoder of
our own. To do automatic encoding (which I think is a very nice
feature, as it can be a complex thing), we either have to 1) depend on
something new or 2) implement YetAnother video encoder.
And to be clear, we're not actually implementing our own encoder, we are
simply calling avcodec_encode_video() and writing the result to disk.
FWIW, I'm not a big fan of "manpower" arguments -- if someone qualified
and motivated can do a job [in this case Bob], we shouldn't stop that,
as long as it is done in a maintainable way. It's up to them to work on
it or not, and we can't expect them to work on something else instead.
i.e. open source workforces are not faced with (a) or (b) choices of
what to work on, and you can't force a volunteer to work on project (b)
when they really are only interested in project (a).
> As Glynn notes, creating animation from an image sequence is better.
Yes, you can get better results _if_ you have lots of experience in
running and tuning a video encoder, which most users won't have.
Otherwise the "better" solution is the one which will lead to a
successful result without hours of study + trial & error.
(wiki/Movies is a huge help of course, thanks)
> One of the best things about encoding an image series into a movie is
> that you can play around with various video encoding options to find
> the best quality/size ratio for your artwork without needing to
> recreate (re-render) every scene in your animation. Stacking together
> all PNGs into one movie with mplayer's encoder is described in the
> GRASS wiki [1] with the basic options that you will need. Mplayer's
> encoder (mencoder) offers many other options - a lot more than any
> GRASS encoding tool could offer.
Sure, for advanced users, but at the same time we should offer
"good-enough" defaults for the casual user who wants a quick result.
[1] http://grass.gdf-hannover.de/wiki/Movies
[I added your mencoder example to the NVIZ keyframe animation help page
as well]
> The hardest part of creating animations is getting those PNG frames to
> disk - controlling the camera and saving the result. The last time I
> had to make a movie in nviz, I ended up with a TCL script controlling
> camera movement and orientation, as, unfortunately, I find the
> built-in nviz tools hard to use.
All are welcome to improve the d.nviz module or write a replacement
using wxPython (e.g. use a vector line to set the camera route).
Also I think the nviz keyframe panel's spline interpolator needs a
wider smoothing window, i.e. don't treat each keyframe as an endpoint,
but take the next keyframe on either side into account for a smooth
transition, versus the current bunny hops.
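One common way to get that wider window is a Catmull-Rom spline, which
interpolates between two keyframes while letting the neighbouring
keyframes shape the tangents. A minimal sketch (my own illustration,
not existing nviz code; it would be applied per camera coordinate):

  /* Catmull-Rom interpolation between keyframes k1 and k2; the
   * neighbouring keyframes k0 and k3 set the tangents so the path
   * stays smooth across keyframe boundaries. t runs from 0 to 1. */
  static double catmull_rom(double k0, double k1, double k2,
                            double k3, double t)
  {
      return 0.5 * ((2.0 * k1)
                    + (-k0 + k2) * t
                    + (2.0 * k0 - 5.0 * k1 + 4.0 * k2 - k3) * t * t
                    + (-k0 + 3.0 * k1 - 3.0 * k2 + k3) * t * t * t);
  }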
> IMHO it would be better to improve the nviz animation tool (one and
> not many) and leave video encoding to other apps.
I don't think they are mutually exclusive options, and both could be
interesting projects for different people. It is my hope that we _do_
leave the bulk of the encoding to FFmpeg, we just use C as a more
direct wrapper versus UNIX shell scripting. Currently we set variables
for the bit-rate and codec, then feed it a stream. Nothing too low
level.
It's just 2 lines of code: lib/ogsf/gsd_img_ppm.c
/* encode the image */
out_size = avcodec_encode_video(c, outbuf, DEFAULT_BUFFER_SIZE, picture);
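/* append the encoded frame to the output file */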
fwrite(outbuf, 1, out_size, fmpg);
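For context, the "set variables for the bit-rate and codec" part
happens once, when the codec context is opened. A minimal sketch using
the old-style libavcodec API (the helper name and the default values
are my own illustration, not the actual gsd_img_ppm.c code):

  #include <avcodec.h>   /* header location varies between installs */

  /* hypothetical helper: open an MPEG-4 encoder context with
   * "good enough" defaults before the encode loop above */
  static AVCodecContext *open_encoder(int width, int height, int fps)
  {
      AVCodec *codec;
      AVCodecContext *c;

      avcodec_register_all();    /* register codecs; once per process */

      codec = avcodec_find_encoder(CODEC_ID_MPEG4);
      if (!codec)
          return NULL;

      c = avcodec_alloc_context();
      c->bit_rate = 2000000;               /* favour quality over size */
      c->width = width;                    /* must match the rendered frames */
      c->height = height;
      c->time_base = (AVRational){1, fps}; /* frames per second */
      c->gop_size = 12;                    /* an intra frame every 12 frames */
      c->pix_fmt = PIX_FMT_YUV420P;

      return (avcodec_open(c, codec) < 0) ? NULL : c;
  }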
Hamish