Discussion:
Transcode to MPEG TS
Robrek V.
2007-02-01 12:44:47 UTC
Permalink
Hello All,
I am the latest addition to the transcode family of users and developers.
I am new to transcoding technology and would like to benefit from the
expertise of all the wonderful folks out here who have put together this
great tool.
As a newbie, is there some background material I could read that would
prepare me in terms of:
1. Understanding the major challenges in transcoding
2. How the coding issues of maintaining A/V sync in the end result are addressed
3. Any other tips that would help this novice
As I move ahead, I would like to use transcode to write an
application that will successfully convert any given input into an
MPEG-2/MPEG-4 transport stream (is this already supported?).
Let us start a great discussion!
:)
regards,
Robrek
Francesco Romani
2007-02-01 19:10:12 UTC
Permalink
On Thu, 01 Feb 2007 18:14:47 +0530
Post by Robrek V.
Hello All,
Ack.
Quick answer: very nice to see new faces; unfortunately I've got a sudden
burst of (real-life) work, so I can't give a full-featured answer (expect
it over the weekend, anyway).

For the moment, I'd like to suggest taking a look at the latest transcode
sources (use CVS HEAD!).

[...]
Post by Robrek V.
As I move ahead, I would like to use transcode to write an
application that will successfully convert any given input into an
MPEG-2/MPEG-4 transport stream (is this already supported?).
That would be really nice ;)
--
Francesco Romani - Ikitt ['people always complain, no matter what you do']
IM contact : (email first, Antispam default deny!) icq://27-83-87-867
tiny homepage : http://fromani.exit1.org (see IDEAS if you want send code!)
known bugs : http://tcfoundry.hostme.it/mantis (EXPERIMENTAL)
Robrek V.
2007-02-02 06:01:41 UTC
Permalink
Post by Francesco Romani
Ack.
Quick answer: very nice to see new faces; unfortunately I've got a sudden
burst of (real-life) work, so I can't give a full-featured answer (expect
it over the weekend, anyway).
I really hope I can come up to speed on understanding the internals, so
as to make a meaningful contribution. I will be looking forward to your
email as much as to the Arsenal match over the weekend. :)
Post by Francesco Romani
For the moment, I'd like to suggest taking a look at the latest transcode
sources (use CVS HEAD!).
Yes. I did download the tar archive to get the hang of things. I am
getting my machine into shape for things to come. Read: installing
support libraries etc.
I also downloaded the htdocs, which I have found quite helpful so far.
Post by Francesco Romani
[...]
Post by Robrek V.
As I move ahead, I would like to use transcode to write an
application that will successfully convert any given input into an
MPEG-2/MPEG-4 transport stream (is this already supported?).
That would be really nice ;)
Fo shizzle!!
It would be interesting to initiate a discussion on the technical issues
in the processing chain involving the steps of:
file src -> container demux/splitter -> audio decode to raw / video
decode to raw -> encode to MPEG Layer 2 / MP3 audio, encode to MPEG-2 video
-> TS mux with __perfect__ A/V sync, utilising clock information from
step 2 (container demux/splitter)
What say you?
Francesco Romani
2007-02-02 08:02:30 UTC
Permalink
On Fri, 02 Feb 2007 11:31:41 +0530
"Robrek V." <***@gmail.com> wrote:

[...]
Post by Robrek V.
I really hope I can come up to speed on understanding the internals, so
as to make a meaningful contribution. I will be looking forward to your
email as much as to the Arsenal match over the weekend. :)
OK, let's be honest: the internals of transcode aren't too good (at least,
they aren't good often enough). A major task for the 1.1.0 release is to
start improving our situation by providing a better infrastructure:
better code, better organization, better STYLE, better documentation.

Moreover, there are some areas in which transcode can serve as a very good
example of how things should _not_ be done :)
But OK, that's life; there is still hope, and we're here to fix things
and make life better -- even if at a slow rate ;)
Post by Robrek V.
Yes. I did download the tar archive to get the hang of things. I am
getting my machine into shape for things to come. Read: installing
support libraries etc.
I also downloaded the htdocs, which I have found quite helpful so far.
Currently, most development time is spent on the export layer, rewriting it
almost from scratch. I'm writing a couple of modules these days, most
notably the libavcodec interface.
There is an almost up-to-date list of missing modules here:
http://fromani.exit1.org.

The new module API is quite stable but is not set in stone forever (..yet),
so it can still be changed if we need to. On this topic, I'd like
to propose a couple of changes in the next few days.

For modules, the main documentation is in docs/module-system-API.txt.
Feel free to send any comments about it.
Post by Robrek V.
Post by Francesco Romani
Post by Robrek V.
As I move ahead, I would like to use transcode to write an
application that will successfully convert any given input into an
MPEG-2/MPEG-4 transport stream (is this already supported?).
That would be really nice ;)
Fo shizzle!!
It would be interesting to initiate a discussion on the technical issues
in the processing chain involving the steps of:
file src -> container demux/splitter -> audio decode to raw / video
decode to raw -> encode to MPEG Layer 2 / MP3 audio, encode to MPEG-2 video
-> TS mux with __perfect__ A/V sync, utilising clock information from
step 2 (container demux/splitter)
What say you?
I'll say that our import layer is very, very crude and naive, so
achieving perfect A/V sync depends heavily on the quality of the source.

About the issues that come to mind, the very first are:
- timestamp preservation on frame sources
- proper frame marking (based on the above)
- dealing with missing/skipped/corrupted frames in the source
- frame rate conversions
- filter reordering (if any) or delay (if any)
- proper muxing decisions based on all of the above

And much, much more.

I must say that there are a good number of pieces simply missing from the
current codebase before we can face this task (and others).

We plan to address those issues incrementally; releasing 1.1.0,
which will offer a MUCH saner base for both developers and users,
is our first goal right now.

Bests,
--
Francesco Romani - Ikitt ['people always complain, no matter what you do']
IM contact : (email first, Antispam default deny!) icq://27-83-87-867
tiny homepage : http://fromani.exit1.org (see IDEAS if you want send code!)
known bugs : http://tcfoundry.hostme.it/mantis (EXPERIMENTAL)
Robrek V.
2007-02-02 12:16:52 UTC
Permalink
Post by Francesco Romani
For modules, the main documentation is in docs/module-system-API.txt.
Feel free to send any comments about it.
Sure.
Post by Francesco Romani
About the issues that come to mind, the very first are:
- timestamp preservation on frame sources
Yes. The frame_buffer_t - which, from my initial understanding, is the
basis for audio/video access unit storage:
This structure could have various attributes filled in based upon the
processing stage the frame is in?
For example, clock information derived from the container format could be
added to the frame_buffer_t. This tagged information would be retained
through the pre/post-processing and encoding stages and be used for A/V
sync activities in the final stage.
How is A/V sync achieved currently?
Does that make any sense?
Post by Francesco Romani
- proper frame marking (based on the above)
Pray elaborate?
Post by Francesco Romani
- dealing with missing/skipped/corrupted frames in the source
Could this be handled by having the video stream as the master clock?
Post by Francesco Romani
- frame rate conversions
- filter reordering (if any) or delay (if any)
- proper muxing decisions based on all of the above
hmmm...
Post by Francesco Romani
And much, much more.
Francesco Romani
2007-02-02 19:00:51 UTC
Permalink
On Fri, 02 Feb 2007 17:46:52 +0530
Post by Robrek V.
Yes. The frame_buffer_t - which, from my initial understanding, is the
Kind of. We're not so satisfied with that structure, and I'm not satisfied
at all with our framebuffer handling.
Post by Robrek V.
This structure could have various attributes filled in based upon the
processing stage the frame is in?
Yes, depending on the filters, but the core is also allowed to change some values.
Post by Robrek V.
For example, clock information derived from the container format could be
added to the frame_buffer_t. This tagged information would be retained
through the pre/post-processing and encoding stages and be used for A/V
sync activities in the final stage.
How is A/V sync achieved currently?
There are a few methods - and that already isn't optimal - (see the -M
option); the default, anyway, is more or less ``get frames in decoded
order and hope for the best''.
Post by Robrek V.
Does that make any sense?
Your proposal definitely makes sense, but it requires some heavy core changes
and, most importantly, a better (well, in fact just `a') demuxing layer.
Post by Robrek V.
Post by Francesco Romani
- proper frame marking (based on the above)
Pray elaborate?
transcode aims to support as many formats as possible for I/O; some of them
have no notion of timestamping at all (groups of images, plain YUV streams,
maybe YUV4MPEG2 too IIRC), so a more general kind of `virtual' timestamping
is needed.
Post by Robrek V.
Post by Francesco Romani
- dealing with missing/skipped/corrupted frames in the source
Could this be handled by having the video stream as the master clock?
Yes, AFAIK.

Bests,
--
Francesco Romani - Ikitt ['people always complain, no matter what you do']
IM contact : (email first, Antispam default deny!) icq://27-83-87-867
tiny homepage : http://fromani.exit1.org (see IDEAS if you want send code!)
known bugs : http://tcfoundry.hostme.it/mantis (EXPERIMENTAL)
Robrek V.
2007-02-05 08:15:13 UTC
Permalink
Post by Francesco Romani
Kind of. We're not so satisfied with that structure, and I'm not satisfied
at all with our framebuffer handling.
There are quite a few interesting buffer architectures used in some open
source frameworks that could be looked at.
Post by Francesco Romani
Post by Robrek V.
This structure could have various attributes filled in based upon the
processing stage the frame is in?
Yes, depending on the filters, but the core is also allowed to change some values.
True. The cases in which the core updates these values versus the filters
updating them themselves need to be sorted out.
Post by Francesco Romani
Post by Robrek V.
For example, clock information derived from the container format could be
added to the frame_buffer_t. This tagged information would be retained
through the pre/post-processing and encoding stages and be used for A/V
sync activities in the final stage.
How is A/V sync achieved currently?
There are a few methods - and that already isn't optimal - (see the -M
option); the default, anyway, is more or less ``get frames in decoded
order and hope for the best''.
I am doing just this.
Post by Francesco Romani
transcode aims to support as many formats as possible for I/O; some of them
have no notion of timestamping at all (groups of images, plain YUV streams,
maybe YUV4MPEG2 too IIRC), so a more general kind of `virtual' timestamping
is needed.
According to what I read, YUV4MPEG2 seems to contain uncompressed YUV
video data meant for MPEG encoding.
Timestamping in the case of transcoding would be relevant for A/V sync
only, right?


Furthermore -
I am trying to get the hang of the whole file-format demux, decode,
encode operation chain.
My interest is in trying to understand:
1. How the timestamps applicable to the audio/video data are stored with
respect to a common clock
2. How they are passed through the filters
3. How they are used based upon the target container format -
although the audio/video encoding format might change in the output
file, the timestamps (relative between audio/video decode and presentation)
would be retained to achieve A/V sync

Any guidance from the community members?
Francesco Romani
2007-02-05 08:43:27 UTC
Permalink
Post by Robrek V.
There are quite a few interesting buffer architectures used in some open
source frameworks that could be looked at.
Indeed. I'll look at how mplayer/mencoder and VirtualDub do it (I've
spotted some interesting thoughts from VirtualDub's author).
Avisynth could be interesting too.
Post by Robrek V.
Post by Francesco Romani
transcode aims to support as many formats as possible for I/O; some of them
have no notion of timestamping at all (groups of images, plain YUV streams,
maybe YUV4MPEG2 too IIRC), so a more general kind of `virtual' timestamping
is needed.
According to what I read, YUV4MPEG2 seems to contain uncompressed YUV
video data meant for MPEG encoding.
Yep. This format was invented by the mjpegtools people, IIRC.

Post by Robrek V.
Timestamping in the case of transcoding would be relevant for A/V sync
only, right?
AFAIK, yes. It could also help with frame rate conversions.

[...]
Post by Robrek V.
Any guidance from the community members?
Not yet, sorry (the job calls, and to be honest I still have some doubts
on this topic). I'd like to suggest taking a look at the
mplayer/mencoder docs; there is some interesting information there.


Bests,
--
Francesco Romani