
From zhou.wenpeng at rdnet.fi Thu Nov 1 09:04:26 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Thu, 1 Nov 2012 10:04:26 +0200
Subject: [Libav-user] I need some C++ source code example for
MJpeg streaming
In-Reply-To: <201210300841.58753.info@denisgottardello.it>
References: <00a201cdb624$6731b080$35951180$@rdnet.fi>
<201210300841.58753.info@denisgottardello.it>
Message-ID: <002b01cdb807$83790240$8a6b06c0$@rdnet.fi>



> -----Original Message-----
> From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Denis
> Sent: 30 October 2012 9:42
> To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
> Subject: Re: [Libav-user] I need some C++ source code example for MJpeg streaming
>
> On Monday 29 October 2012 23:26:12, Wenpeng Zhou wrote:
> > But I do not know how to program MJpeg stream with C++.
> > Who can provide some examples?
>
>
> If you want to send a stream across the network in mjpeg format, it is not
> necessary to use ffmpeg. The only thing that you must do is send jpg images
> in sequence. You can watch the stream with Firefox, for example.
>
>
Hi Denis,

Thanks for your reply!
Firstly, I will use ffmpeg to capture live video, then HTTP-stream the live
video in the mjpeg format.
How can I do it? Does ffmpeg have some APIs to do it?


From info at denisgottardello.it Thu Nov 1 09:19:07 2012
From: info at denisgottardello.it (Denis)
Date: Thu, 1 Nov 2012 09:19:07 +0100
Subject: [Libav-user] I need some C++ source code example for MJpeg streaming
In-Reply-To: <002b01cdb807$83790240$8a6b06c0$@rdnet.fi>
References: <00a201cdb624$6731b080$35951180$@rdnet.fi>
<201210300841.58753.info@denisgottardello.it>
<002b01cdb807$83790240$8a6b06c0$@rdnet.fi>
Message-ID: <201211010919.08003.info@denisgottardello.it>

On Thursday 01 November 2012 09:04:26, Wenpeng Zhou wrote:
> Hi Denis,
>
> Thanks for your reply!
> Firstly, I will use ffmpeg to capture live video, then HTTP-stream the live
> video in the mjpeg format.
> How can I do it? Does ffmpeg have some APIs to do it?


I think so, but I use OpenCV to capture from the camera. To send across the
network I use ffmpeg to produce an h264 or mpg stream. For mjpg I simply send
a stream of jpg images directly.
I think you must extract the AVFrame or image from the stream, convert it to
jpg and then send it.
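
One way to do the "convert it to jpg" step with libavcodec itself is to run
the frame through the built-in mjpeg encoder. A minimal sketch against the
FFmpeg 1.x API of the time (frame_to_jpeg and its parameters are illustrative
names, the frame is assumed to already be in a JPEG-friendly format such as
PIX_FMT_YUVJ420P, and error handling is mostly omitted):

extern "C" {
#include <libavcodec/avcodec.h>
}

static int frame_to_jpeg(AVFrame *frame, int width, int height, AVPacket *jpeg)
{
    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = width;
    ctx->height = height;
    ctx->pix_fmt = PIX_FMT_YUVJ420P;
    ctx->time_base.num = 1;   /* required by the encoder; the exact value */
    ctx->time_base.den = 25;  /* does not matter for a single still image */
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return -1;

    av_init_packet(jpeg);
    jpeg->data = NULL;        /* let the encoder allocate the output buffer */
    jpeg->size = 0;
    int got_packet = 0;
    int ret = avcodec_encode_video2(ctx, jpeg, frame, &got_packet);

    avcodec_close(ctx);
    av_free(ctx);
    /* On success jpeg->data / jpeg->size hold one complete JPEG image; the
       caller sends it and then releases it with av_free_packet(). */
    return (ret < 0 || !got_packet) ? -1 : 0;
}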

--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php

From zhou.wenpeng at rdnet.fi Thu Nov 1 09:27:09 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Thu, 1 Nov 2012 10:27:09 +0200
Subject: [Libav-user] I need some C++ source code example for
MJpeg streaming
In-Reply-To: <201211010919.08003.info@denisgottardello.it>
References: <00a201cdb624$6731b080$35951180$@rdnet.fi> <201210300841.58753.info@denisgottardello.it> <002b01cdb807$83790240$8a6b06c0$@rdnet.fi>
<201211010919.08003.info@denisgottardello.it>
Message-ID: <002d01cdb80a$afdd44b0$0f97ce10$@rdnet.fi>

> > Hi Denis,
> >
> > Thanks for your reply!
> > Firstly, I will use ffmpeg to capture live video, then HTTP-stream the
> > live video in the mjpeg format.
> > How can I do it? Does ffmpeg have some APIs to do it?
>
>
> I think so, but I use OpenCV to capture from the camera. To send across the
> network I use ffmpeg to produce an h264 or mpg stream. For mjpg I simply
> send a stream of jpg images directly.
> I think you must extract the AVFrame or image from the stream, convert it
> to jpg and then send it.
>
>
Hi Denis,
Could you please give some code example?
I also use OpenCV to capture video. I save the jpg image in memory. But
I don't know how to stream it over HTTP. How can you send a stream of jpg
images?
When you use ffmpeg to produce H264 from OpenCV VideoCapture and stream it,
how long is the latency?


From info at denisgottardello.it Thu Nov 1 09:42:02 2012
From: info at denisgottardello.it (Denis)
Date: Thu, 1 Nov 2012 09:42:02 +0100
Subject: [Libav-user] I need some C++ source code example for MJpeg streaming
In-Reply-To: <002d01cdb80a$afdd44b0$0f97ce10$@rdnet.fi>
References: <00a201cdb624$6731b080$35951180$@rdnet.fi>
<201211010919.08003.info@denisgottardello.it>
<002d01cdb80a$afdd44b0$0f97ce10$@rdnet.fi>
Message-ID: <201211010942.03108.info@denisgottardello.it>

On Thursday 01 November 2012 09:27:09, Wenpeng Zhou wrote:
> Hi Denis,
> Could you please give some code example?
> I also use OpenCV to capture video. I save the jpg image in memory. But
> I don't know how to stream it over HTTP.

I wrote an HTTP web server component in C++ (using the Qt libraries; Qt is
needed) to send the stream. I use this component to exchange data between
server and clients and to send the stream.

> How can you send a stream of jpg images?
> When you use ffmpeg to produce H264 from OpenCV VideoCapture and stream it,
> how long is the latency?


mjpg: 0 seconds
msmpeg4: 1 - 2 seconds
mpeg4: 3 - 5 seconds.
h264: 5 - 8 seconds.

h264 is the best in compression but not for latency.
If you want, you can try my application (you can find it on my website; its
name is DomusBoss) to test the different solutions.
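
For the "how can you send a stream of jpg images" part, one common wire format
is an HTTP multipart/x-mixed-replace response, which browsers such as Firefox
understand. A minimal sketch of just that framing (SendFn/send_all and the
boundary name are illustrative; the JPEG buffers are assumed to be already
encoded in memory, e.g. by OpenCV's imencode):

#include <cstdio>
#include <functional>
#include <vector>

/* Callback that performs the actual socket write (QTcpSocket::write, send(), ...). */
using SendFn = std::function<void(const void *data, size_t len)>;

void send_mjpeg_response_header(const SendFn &send_all)
{
    static const char hdr[] =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n"
        "Cache-Control: no-cache\r\n"
        "\r\n";
    send_all(hdr, sizeof(hdr) - 1);
}

void send_mjpeg_frame(const SendFn &send_all, const std::vector<unsigned char> &jpeg)
{
    char part[128];
    int n = std::snprintf(part, sizeof(part),
                          "--frame\r\nContent-Type: image/jpeg\r\nContent-Length: %lu\r\n\r\n",
                          (unsigned long)jpeg.size());
    send_all(part, (size_t)n);
    send_all(jpeg.data(), jpeg.size());   /* the raw JPEG bytes */
    send_all("\r\n", 2);                  /* ends this part; loop for the next frame */
}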

--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/d0adab91/attachment.html>

From tomas.hardin at codemill.se Thu Nov 1 09:47:08 2012
From: tomas.hardin at codemill.se (Tomas Härdin)
Date: Thu, 01 Nov 2012 09:47:08 +0100
Subject: [Libav-user] pass-through "re-containering" with libavXXXX
In-Reply-To: <47F362FE-27D7-4798-ACCD-A3EB0C69FE69@gmail.com>
References: <47F362FE-27D7-4798-ACCD-A3EB0C69FE69@gmail.com>
Message-ID: <1351759628.2390.2.camel@Aspire-5820TG>

On Tue, 2012-10-30 at 23:15 +0100, "René J.V. Bertin" wrote:
> Hello list,
>
> Can someone point me to a bare-bones (possibly pseudo-code) implementation using libavformat, libavcodec etc. of a command like
>
> ffmpeg -i somekindofIncomplete.mp4 -vcodec copy wellFormed.m4v

See doc/examples/demuxing.c and muxing.c

For a hackish way, assuming you want a minimal binary: Take ffmpeg.c,
strip it down until it does only what you want and ./configure
--disable-everything --enable-demuxer=mov --enable-muxer=m4v
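
Along those lines, a bare-bones stream-copy ("-vcodec copy") loop against the
FFmpeg 1.x API, as a sketch only (remux() is an illustrative name, error
handling is mostly omitted):

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}

int remux(const char *in_name, const char *out_name)
{
    AVFormatContext *ic = NULL, *oc = NULL;
    av_register_all();
    if (avformat_open_input(&ic, in_name, NULL, NULL) < 0) return -1;
    if (avformat_find_stream_info(ic, NULL) < 0) return -1;

    avformat_alloc_output_context2(&oc, NULL, NULL, out_name); /* format guessed from extension */
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *os = avformat_new_stream(oc, NULL);
        avcodec_copy_context(os->codec, ic->streams[i]->codec); /* the "copy" part */
        os->codec->codec_tag = 0; /* let the new container pick its own tag */
    }
    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_open(&oc->pb, out_name, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    AVPacket pkt;
    while (av_read_frame(ic, &pkt) >= 0) {
        AVStream *is = ic->streams[pkt.stream_index];
        AVStream *os = oc->streams[pkt.stream_index];
        /* Rescale timestamps from the input to the output time base. */
        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pkt.pts, is->time_base, os->time_base);
        if (pkt.dts != AV_NOPTS_VALUE)
            pkt.dts = av_rescale_q(pkt.dts, is->time_base, os->time_base);
        pkt.duration = av_rescale_q(pkt.duration, is->time_base, os->time_base);
        av_interleaved_write_frame(oc, &pkt); /* takes ownership of the packet data */
    }
    av_write_trailer(oc);
    avformat_close_input(&ic);
    avformat_free_context(oc);
    return 0;
}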

/Tomas
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/9c8edcb4/attachment.asc>

From info at denisgottardello.it Thu Nov 1 09:48:28 2012
From: info at denisgottardello.it (Denis)
Date: Thu, 1 Nov 2012 09:48:28 +0100
Subject: [Libav-user] I need some C++ source code example for MJpeg streaming
In-Reply-To: <002d01cdb80a$afdd44b0$0f97ce10$@rdnet.fi>
References: <00a201cdb624$6731b080$35951180$@rdnet.fi>
<201211010919.08003.info@denisgottardello.it>
<002d01cdb80a$afdd44b0$0f97ce10$@rdnet.fi>
Message-ID: <201211010948.28765.info@denisgottardello.it>

On Thursday 01 November 2012 09:27:09, Wenpeng Zhou wrote:
> When you use ffmpeg to produce H264 from OpenCV VideoCapture and stream it,
> how long is the latency?


I think a very interesting thing to know is whether the latency of h264 is
different when using h264 directly, without ffmpeg.

--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php

From rjvbertin at gmail.com Thu Nov 1 12:47:08 2012
From: rjvbertin at gmail.com ("René J.V. Bertin")
Date: Thu, 1 Nov 2012 12:47:08 +0100
Subject: [Libav-user] pass-through "re-containering" with libavXXXX
In-Reply-To: <1351759628.2390.2.camel@Aspire-5820TG>
References: <47F362FE-27D7-4798-ACCD-A3EB0C69FE69@gmail.com>
<1351759628.2390.2.camel@Aspire-5820TG>
Message-ID: <D66A6EA6-C2D9-4711-9E6E-8E0DA86E9647@gmail.com>

Good idea about muxing.c; I hadn't thought about that. I did look at ffmpeg though, and quickly got lost trying to understand what happens when -vcodec copy is passed on the command line :-/

R.

On Nov 01, 2012, at 09:47, Tomas Härdin wrote:

> On Tue, 2012-10-30 at 23:15 +0100, "René J.V. Bertin" wrote:
>> Hello list,
>>
>> Can someone point me to a bare-bones (possibly pseudo-code) implementation using libavformat, libavcodec etc. of a command like
>>
>> ffmpeg -i somekindofIncomplete.mp4 -vcodec copy wellFormed.m4v
>
> See doc/examples/demuxing.c and muxing.c
>
> For a hackish way, assuming you want a minimal binary: Take ffmpeg.c,
> strip it down until it does only what you want and ./configure
> --disable-everything --enable-demuxer=mov --enable-muxer=m4v
>
> /Tomas


From aw.wisch at googlemail.com Thu Nov 1 15:34:33 2012
From: aw.wisch at googlemail.com (Alexander Wischnewski)
Date: Thu, 1 Nov 2012 22:34:33 +0800
Subject: [Libav-user] h264 decoder problem with rtsp stream
Message-ID: <CAJg_av-R7bKWSj_2MO1=cM3phYb78vChLKgC-pgZxSg8N80GdA@mail.gmail.com>

Hello,

I'm using ffmpeg on iOS to decode an h264 rtsp stream from an IP camera. I'm
connecting to the stream via HTTP tunneling and it is working so far, but
the decoder has various problems with some frames, resulting in bad image
quality. I'm new to ffmpeg and video decoding in general, so I have no guess
where the problem might be. I mostly used the iFrameExtractor app as an
example to write my own code, which works very well for cameras with an
mjpeg stream.

I've heard that normally you have to extract the SPS/PPS from the rtsp
frames and send them to the decoder; on the other hand some people write
that ffmpeg extracts them itself. This is the output I'm receiving in
Xcode:

[h264 @ 0xba7bc00] sps_id (32) out of range

[h264 @ 0xba7bc00] sps_id (32) out of range

[h264 @ 0xba7bc00] sps_id (32) out of range

[h264 @ 0x9ab5e00] max_analyze_duration 100000 reached at 120000

[h264 @ 0x9ab5e00] decoding for stream 0 failed

[h264 @ 0x9ab5e00] Estimating duration from bitrate, this may be inaccurate

[swscaler @ 0x9b2f000] No accelerated colorspace conversion found from
yuv420p to rgb24.

[h264 @ 0x9b2e400] sps_id (32) out of range

[h264 @ 0x9b2e400] sps_id (32) out of range

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e800] sps_id out of range

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0x9b2e400] Partitioned H.264 support is incomplete

[h264 @ 0x9b2e400] Partitioned H.264 support is incomplete

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9a9a800] sps_id out of range

[h264 @ 0xba7bc00] chroma_format_idc 32 is illegal

[h264 @ 0x9a9a800] chroma_format_idc 32 is illegal

[h264 @ 0x9a9a800] chroma_format_idc 32 is illegal

[h264 @ 0x9b2ec00] Partitioned H.264 support is incomplete

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e400] sps_id out of range

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e000] sps_id out of range

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e400] sps_id out of range

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] illegal bit depth value (28, 8)

[h264 @ 0x9b2e000] illegal bit depth value (28, 8)

[h264 @ 0x9b2e000] illegal bit depth value (28, 8)

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] illegal bit depth value (8, 50)

[h264 @ 0x9b2e800] illegal bit depth value (8, 50)

[h264 @ 0x9b2e800] illegal bit depth value (8, 50)

[h264 @ 0x9b2e800] Partitioned H.264 support is incomplete

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e000] sps_id out of range

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] chroma_format_idc 5 is illegal

[h264 @ 0x9b2e400] chroma_format_idc 5 is illegal

[h264 @ 0x9b2e400] chroma_format_idc 5 is illegal

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e800] sps_id out of range

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e800] sps_id out of range

[h264 @ 0x9b2e400] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e400] decode_slice_header error

[h264 @ 0x9b2e400] no frame!

[h264 @ 0x9b2e800] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e800] decode_slice_header error

[h264 @ 0x9b2e800] no frame!

[h264 @ 0x9b2ec00] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2ec00] decode_slice_header error

[h264 @ 0x9b2ec00] no frame!

[h264 @ 0x9a9a800] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9a9a800] decode_slice_header error

[h264 @ 0x9a9a800] no frame!

[h264 @ 0x9b2e000] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e000] decode_slice_header error

[h264 @ 0x9b2e000] no frame!

[h264 @ 0x9b2e400] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e400] decode_slice_header error

[h264 @ 0x9b2e400] no frame!

[h264 @ 0x9b2e800] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e800] decode_slice_header error

[h264 @ 0x9b2e800] no frame!

[h264 @ 0x9b2ec00] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2ec00] decode_slice_header error

[h264 @ 0x9b2ec00] no frame!

[h264 @ 0x9a9a800] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9a9a800] decode_slice_header error

[h264 @ 0x9a9a800] no frame!

[h264 @ 0xba7bc00] illegal aspect ratio

[h264 @ 0x9b2e000] illegal aspect ratio

[h264 @ 0x9b2e000] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e000] decode_slice_header error

[h264 @ 0x9b2e000] no frame!

[h264 @ 0x9b2e400] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e400] decode_slice_header error

[h264 @ 0x9b2e400] no frame!

[h264 @ 0x9b2e800] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2e800] decode_slice_header error

[h264 @ 0x9b2e800] no frame!

[h264 @ 0x9b2ec00] Width/height/bit depth/chroma idc changing with threads
is not implemented. Update your FFmpeg version to the newest one from Git.
If the problem still occurs, it means that your file has a feature which
has not been implemented.

[h264 @ 0x9b2ec00] decode_slice_header error

[h264 @ 0x9b2ec00] no frame!

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] non-existing PPS referenced

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2e400] sps_id out of range

[h264 @ 0xba7bc00] sps_id out of range

[h264 @ 0x9b2ec00] sps_id out of range

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0xba7bc00] missing picture in access unit with size 16

[h264 @ 0x9b2e800] Partitioned H.264 support is incomplete

My ffmpeg build options (i386 simulator):

--disable-shared --disable-doc --disable-devices --disable-ffmpeg
--disable-ffplay --disable-ffprobe --disable-ffserver --disable-protocols
--enable-protocol=http --enable-protocol=rtp --enable-protocol=udp
--disable-filters --disable-encoders --disable-decoders
--enable-decoder=mjpeg --enable-decoder=h264 --disable-demuxers
--enable-demuxer=mjpeg --enable-demuxer=h264 --enable-demuxer=rtsp
--disable-demuxer=mov --disable-muxers --disable-parsers
--enable-parser=mjpeg --enable-parser=h264 --enable-neon --enable-pthreads
--enable-swscale --enable-cross-compile --arch=i386 --target-os=darwin
--cpu=i386 --prefix=compiled/i386 --enable-pic --enable-static

Pseudo code in the app looks like this:

av_register_all();
avformat_network_init();

pFormatCtx = avformat_alloc_context();
avInformat = av_find_input_format("h264");
avformat_open_input(&pFormatCtx, moviePath, avInformat, &options);
avformat_find_stream_info(pFormatCtx, NULL);

pCodecCtx = pFormatCtx->streams[videoStream]->codec;
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);

AVDictionary* codecOpts = NULL;
avcodec_open2(pCodecCtx, pCodec, &codecOpts);

.....

I'm using the latest git head of ffmpeg and have already tested with
pthreading turned off in my build configuration, but the output is still the
same. VLC can't access the HTTP tunnel; the rtsp URL works directly, just not
with ffmpeg.

Hope someone can give me a hint.

Best regards,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/b44deb17/attachment.html>

From joshi_kriya at yahoo.com Thu Nov 1 18:54:56 2012
From: joshi_kriya at yahoo.com (Kriya Joshi)
Date: Thu, 1 Nov 2012 10:54:56 -0700 (PDT)
Subject: [Libav-user] Use ffmpeg for iphone app developement
Message-ID: <1351792496.75742.YahooMailNeo@web125002.mail.ne1.yahoo.com>

Hello

I am working as an iPhone developer. I am creating an app in which recorded video frame images need to be extracted and stored somewhere in my application.
By Googling, I got to know about ffmpeg. I downloaded code for this from "https://github.com/lajos/iFrameExtractor" but it does not download the ffmpeg files. So, I downloaded those files from "http://ffmpeg.org/download.html#releases".

What I want for my iPhone app development is the "libavcodec.a" file, but the files which I downloaded from the ffmpeg website contain different files and folders. I also got a folder named "libavcodec".

Can you please guide me on how I can generate this library/framework type file from the folder which I downloaded? Please help me with this. I am stuck with it. Thank you in advance.


Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/297f51f9/attachment.html>

From drodrigues at wit-software.com Thu Nov 1 20:45:46 2012
From: drodrigues at wit-software.com (David Rodrigues)
Date: Thu, 1 Nov 2012 19:45:46 +0000
Subject: [Libav-user] Use ffmpeg for iphone app developement
In-Reply-To: <1351792496.75742.YahooMailNeo@web125002.mail.ne1.yahoo.com>
References: <1351792496.75742.YahooMailNeo@web125002.mail.ne1.yahoo.com>
Message-ID: <14388FA0-88E2-43E4-AB63-AB9A69EA04CD@wit-software.com>

Hi,

I've written a script to compile FFmpeg for iOS. You can check it here:

https://github.com/dmcrodrigues/FFmpeg-iOS-build-script

Hope it helps!

David

On 01/11/2012, at 17:54, Kriya Joshi <joshi_kriya at yahoo.com> wrote:

> Hello
>
> I am working as an iPhone developer. I am creating an app in which recorded video frame images need to be extracted and stored somewhere in my application.
> By Googling, I got to know about ffmpeg. I downloaded code for this from "https://github.com/lajos/iFrameExtractor" but it does not download the ffmpeg files. So, I downloaded those files from "http://ffmpeg.org/download.html#releases".
> What I want for my iPhone app development is the "libavcodec.a" file, but the files which I downloaded from the ffmpeg website contain different files and folders. I also got a folder named "libavcodec".
> Can you please guide me on how I can generate this library/framework type file from the folder which I downloaded? Please help me with this. I am stuck with it. Thank you in advance.
>
>
> Thanks
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/6ca62925/attachment.html>

From lumpings at web.de Thu Nov 1 21:05:34 2012
From: lumpings at web.de (Lumping 4Ever)
Date: Thu, 1 Nov 2012 21:05:34 +0100 (CET)
Subject: [Libav-user] FFMPEG - Undefined reference to 'av_...()'
Message-ID: <trinity-8992b63c-6e2f-4047-bc7c-91ab394cd5ad-1351800334125@3capp-webde-bs19>

An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/8ea92947/attachment.html>

From nicolas.george at normalesup.org Thu Nov 1 22:09:34 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Thu, 1 Nov 2012 22:09:34 +0100
Subject: [Libav-user] Use ffmpeg for iphone app developement
In-Reply-To: <1351792496.75742.YahooMailNeo@web125002.mail.ne1.yahoo.com>
References: <1351792496.75742.YahooMailNeo@web125002.mail.ne1.yahoo.com>
Message-ID: <20121101210934.GA7434@phare.normalesup.org>

On primidi 11 brumaire, year CCXXI, Kriya Joshi wrote:
> What I want for my iPhone app development is the
> "libavcodec.a" file, but the files which I downloaded from the ffmpeg website
> contain different files and folders. I also got a folder named "libavcodec".

This is not an answer to your question per se, but it may save you trouble
later:

Are you aware that there are a few strong legal constraints to meet if
you want to distribute applications based on ffmpeg or its libraries?

I ask because, as far as I know, these constraints are very hard to meet for
an iPhone application, due to Apple's ultra-closed policy.

Regards,

--
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121101/dc535686/attachment.asc>

From cehoyos at ag.or.at Fri Nov 2 13:40:07 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Fri, 2 Nov 2012 12:40:07 +0000 (UTC)
Subject: [Libav-user] FFMPEG - Undefined reference to 'av_...()'
References: <trinity-8992b63c-6e2f-4047-bc7c-91ab394cd5ad-1351800334125@3capp-webde-bs19>
Message-ID: <loom.20121102T133625-307@post.gmane.org>

Lumping 4Ever <lumpings at ...> writes:

> ./configure --disable-yasm --enable-shared

Unrelated to your question:
Why do you use --disable-yasm ?
(It is a bad idea.)

> I got the library files (.a)

Why are you using --enable-shared (above) if you want
to use the .a files?

[...]

> extern "C"{?? #include "libavcodec/avcodec.h"?
>? #include "libavutil/mathematics.h"}
>
> extern "C" int SDL_main(int argc, char *argv[]);
>
> int main(int argc, char *argv[]){
> ??? av_register_all();

I don't think this is supposed to work:
av_register_all() is declared in libavformat/avformat.h,
which you do not include.
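
For reference, a minimal layout along those lines that should link (a sketch
only; the SDL parts are left out):

extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"   /* declares av_register_all() */
#include "libavutil/mathematics.h"
}

int main(int argc, char *argv[])
{
    av_register_all();   /* registers all muxers, demuxers and codecs */
    return 0;
}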

Your mailer may be completely broken, please see how
your message arrives:
http://ffmpeg.org/pipermail/libav-user/2012-November/003024.html
Consider reading http://ffmpeg.org/contact.html again.

Carl Eugen


From andrea3000 at email.it Sat Nov 3 10:25:37 2012
From: andrea3000 at email.it (Andrea3000)
Date: Sat, 3 Nov 2012 10:25:37 +0100
Subject: [Libav-user] How to handle dynamically changing streams /
parameters / codecs?
Message-ID: <4163B86F-B2AD-4177-9478-ADC40D5C1D7B@email.it>

Hi all,

I'm developing an application for playing media contents thanks to FFmpeg API.
Those media contents come from a custom input which is handled with FFmpeg's custom I/O callbacks and it works fine.

The problem I need to address is that the media stream can sometimes dynamically change the number of video and/or audio streams, the stream parameters (such as bitrate),
and it can also change the audio and/or video codecs.

When such a change is about to happen, the custom input (through read_packet) provides the remaining packets of the stream before the change, then notifies me of the change, and then continues to feed the decoder with new packets coming from the stream after the change.
So I know exactly when this discontinuity occurs, but I don't know how to handle it.

What do I need to update in the FFmpeg structure to handle the change? Only AVFormatContext?
Is there a way to handle it?

Thank you
Regards

Andrea

From cehoyos at ag.or.at Sat Nov 3 16:50:31 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sat, 3 Nov 2012 15:50:31 +0000 (UTC)
Subject: [Libav-user] How to handle dynamically changing streams /
parameters / codecs?
References: <4163B86F-B2AD-4177-9478-ADC40D5C1D7B@email.it>
Message-ID: <loom.20121103T164756-591@post.gmane.org>

Andrea3000 <andrea3000 at ...> writes:

> What do I need to update in the FFmpeg structure to
> handle the change?

I suggest you first test ffmpeg (the command line application)
or ffplay (or MPlayer). If they do not support the changing of
number of audio / video streams or parameters (not bitrate,
since that changes all the time for real-world codecs),
consider reporting it here, on ffmpeg-users or on trac.

If it works, you only have to make updates in your
application.

Codec change is generally unsupported afaik, if you have
such a sample, please provide it!

Carl Eugen


From info at denisgottardello.it Sun Nov 4 09:30:42 2012
From: info at denisgottardello.it (Denis)
Date: Sun, 4 Nov 2012 09:30:42 +0100
Subject: [Libav-user] muxing.c, memory leak
In-Reply-To: <201210280856.02282.info@denisgottardello.it>
References: <201210280856.02282.info@denisgottardello.it>
Message-ID: <201211040930.42103.info@denisgottardello.it>

On Sunday 28 October 2012 08:56:02, Denis wrote:
> Using valgrind I obtain a memory leak in this row:
>
>
> "pAVStream= add_video_stream(pAVFormatContext, &pAVCodec,
> pAVOutputFormat->video_codec);" if (posix_memalign(&ptr,ALIGN,size))
> av_mallocz in mem.c:95
>
>
> Can anybody verify if it
> is true?
>
>
> I have an application that uses ffmpeg. In 2 weeks the application uses all
> the memory and now I suspect that the problem is in the wrong use of
> ffmpeg libraries.


Can anybody help me?

--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php

From cehoyos at ag.or.at Sun Nov 4 14:26:56 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sun, 4 Nov 2012 13:26:56 +0000 (UTC)
Subject: [Libav-user] muxing.c, memory leak
References: <201210280856.02282.info@denisgottardello.it>
Message-ID: <loom.20121104T142605-596@post.gmane.org>

Denis <info at ...> writes:

> Using valgrind I obtain a memory leak in this row:
>
> "pAVStream= add_video_stream(pAVFormatContext, &pAVCodec,
> pAVOutputFormat->video_codec);"
> if (posix_memalign(&ptr,ALIGN,size))
> av_mallocz in mem.c:95

Could you post the valgrind output?

> Can anybody verify if it
> is true?
>
> I have an application that uses ffmpeg. In 2 weeks the
> application uses all the memory and now I suspect that
> the problem is in the wrong use of ffmpeg libraries.

You could run valgrind on your application.

Carl Eugen


From andrea3000 at email.it Sun Nov 4 16:08:59 2012
From: andrea3000 at email.it (Andrea3000)
Date: Sun, 4 Nov 2012 16:08:59 +0100
Subject: [Libav-user] How to handle dynamically changing streams /
parameters / codecs?
In-Reply-To: <loom.20121103T164756-591@post.gmane.org>
References: <4163B86F-B2AD-4177-9478-ADC40D5C1D7B@email.it>
<loom.20121103T164756-591@post.gmane.org>
Message-ID: <B3D40133-7E0C-40B3-9A5A-A38E15FFDBBB@email.it>

> Andrea3000 <andrea3000 at ...> writes:
>
>> What do I need to update in the FFmpeg structure to
>> handle the change?
>
> I suggest you first test ffmpeg (the command line application)
> or ffplay (or MPlayer). If they do not support the changing of
> number of audio / video streams or parameters (not bitrate,
> since that changes all the time for real-world codecs),
> consider reporting it here, on ffmpeg-users or on trac.

I have tested ffplay and MPlayer and neither of them is able to handle the change in parameters (the codec is the same).
But I don't think that this is a bug in ffmpeg; I think it simply requires some buffer flush and/or reset, performed on the parameter change (my application is responsible for doing this, not ffmpeg).

Let me try to better explain why I say this.

The change in stream parameters and stream count occurs because the custom ffmpeg I/O callbacks read two different movie files one after the other, with a seamless transition between the first and the second.
On file change, the application receives a notification that informs it that the next packets will come from the second file.

Xine media player has an implementation for this particular custom input as well as for this type of notification, and it performs a stream reset when the change occurs. If I simply cat the two files and try to load the result with xine as a normal movie file, it gives me corrupted output after the parameter change (like ffplay and MPlayer).
Therefore this suggests that I simply need to reset some FFmpeg buffers from within my application, but I don't know which ones.

I've uploaded the two single files as well as the cat'ed version; you can find them here:
file1: https://dl.dropbox.com/u/11879013/file1.m2ts
file2: https://dl.dropbox.com/u/11879013/file2.m2ts
combined: https://dl.dropbox.com/u/11879013/combined.m2ts

Have you got any hint? Am I missing something?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121104/f0109e1c/attachment.html>

From info at denisgottardello.it Sun Nov 4 16:40:39 2012
From: info at denisgottardello.it (Denis)
Date: Sun, 4 Nov 2012 16:40:39 +0100
Subject: [Libav-user] muxing.c, memory leak
In-Reply-To: <loom.20121104T142605-596@post.gmane.org>
References: <201210280856.02282.info@denisgottardello.it>
<loom.20121104T142605-596@post.gmane.org>
Message-ID: <201211041640.39753.info@denisgottardello.it>

On Sunday 04 November 2012 14:26:56, Carl Eugen Hoyos wrote:
> > Using valgrind I obtain a memory leak in this row:
> >
> >
> > "pAVStream= add_video_stream(pAVFormatContext, &pAVCodec,
> > pAVOutputFormat->video_codec);"
> > if (posix_memalign(&ptr,ALIGN,size))
> > av_mallocz in mem.c:95
>
> Could you post the valgrind output?


4 bytes in 1 blocks are definitely lost in loss record 1 of 6
in muxing(char const*) in main.cpp:275
1: malloc in /tmp/buildd/valgrind-3.6.0~svn11254+nmu1/coregrind/m_replacemalloc/vg_replace_malloc.c:236
2: realloc in /tmp/buildd/valgrind-3.6.0~svn11254+nmu1/coregrind/m_replacemalloc/vg_replace_malloc.c:525
3: avformat_new_stream in /home/denis/C++/ffmpeg-1.0/libavformat/utils.c:3138
4: muxing(char const*) in main.cpp:275
5: main in main.cpp:324






--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php

From info at denisgottardello.it Sun Nov 4 16:48:44 2012
From: info at denisgottardello.it (Denis)
Date: Sun, 4 Nov 2012 16:48:44 +0100
Subject: [Libav-user] muxing.c, memory leak
In-Reply-To: <loom.20121104T142605-596@post.gmane.org>
References: <201210280856.02282.info@denisgottardello.it>
<loom.20121104T142605-596@post.gmane.org>
Message-ID: <201211041648.44456.info@denisgottardello.it>

On Sunday 04 November 2012 14:26:56, Carl Eugen Hoyos wrote:
> > Using valgrind I obtain a memory leak in this row:
> >
> >
> > "pAVStream= add_video_stream(pAVFormatContext, &pAVCodec,
> > pAVOutputFormat->video_codec);"
> > if (posix_memalign(&ptr,ALIGN,size))
> > av_mallocz in mem.c:95
>
> Could you post the valgrind output?

4 bytes in 1 blocks are definitely lost in loss record 1 of 6
in muxing(char const*) in main.cpp:275
1: malloc in /tmp/buildd/valgrind-3.6.0~svn11254+nmu1/coregrind/m_replacemalloc/vg_replace_malloc.c:236
2: realloc in /tmp/buildd/valgrind-3.6.0~svn11254+nmu1/coregrind/m_replacemalloc/vg_replace_malloc.c:525
3: avformat_new_stream in /home/denis/C++/ffmpeg-1.0/libavformat/utils.c:3138
4: muxing(char const*) in main.cpp:275
5: main in main.cpp:324


Row 275 is below:

pAVStream= add_video_stream(pAVFormatContext, &pAVCodec, pAVOutputFormat->video_codec);

which has the corresponding cleanup:

close_video(pAVFormatContext, pAVStream);
for (int count= 0; count< pAVFormatContext->nb_streams; count++) {
    av_freep(&pAVFormatContext->streams[count]->codec);
    av_freep(&pAVFormatContext->streams[count]);
}
if (!(pAVOutputFormat->flags & AVFMT_NOFILE)) avio_close(pAVFormatContext->pb);
//av_free(pAVFormatContext);
avformat_free_context(pAVFormatContext);

but valgrind still detects a memory leak.


--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121104/89091bc6/attachment.html>

From cehoyos at ag.or.at Sun Nov 4 19:20:56 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sun, 4 Nov 2012 18:20:56 +0000 (UTC)
Subject: [Libav-user] How to handle dynamically changing streams /
parameters / codecs?
References: <4163B86F-B2AD-4177-9478-ADC40D5C1D7B@email.it>
<loom.20121103T164756-591@post.gmane.org>
<B3D40133-7E0C-40B3-9A5A-A38E15FFDBBB@email.it>
Message-ID: <loom.20121104T191516-454@post.gmane.org>

Andrea3000 <andrea3000 at ...> writes:

> file1: https://dl.dropbox.com/u/11879013/file1.m2ts
>
> file2: https://dl.dropbox.com/u/11879013/file2.m2ts

I also tested cat file1.m2ts file1.m2ts >test1.ts and
cat file2.m2ts file2.m2ts >test2.ts (and WMP) and it
could be possible that this cannot be supported due
to different encoding parameters.

If you know that the stream changes, you can always
close and reopen the decoder.
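
A minimal sketch of that close/reopen idea (assuming the demuxer has already
updated the stream's codec context, i.e. codec_id and extradata, for the new
parameters; reopen_decoder is an illustrative name):

extern "C" {
#include <libavcodec/avcodec.h>
}

static int reopen_decoder(AVCodecContext *ctx)
{
    avcodec_close(ctx);                                  /* drop the old decoder state */
    AVCodec *dec = avcodec_find_decoder(ctx->codec_id);  /* may now be a different codec */
    if (!dec)
        return -1;
    return avcodec_open2(ctx, dec, NULL);                /* start fresh with the new parameters */
}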

Carl Eugen


From andrea3000 at email.it Sun Nov 4 20:08:07 2012
From: andrea3000 at email.it (Andrea3000)
Date: Sun, 4 Nov 2012 20:08:07 +0100
Subject: [Libav-user] How to handle dynamically changing streams /
parameters / codecs?
In-Reply-To: <loom.20121104T191516-454@post.gmane.org>
References: <4163B86F-B2AD-4177-9478-ADC40D5C1D7B@email.it>
<loom.20121103T164756-591@post.gmane.org>
<B3D40133-7E0C-40B3-9A5A-A38E15FFDBBB@email.it>
<loom.20121104T191516-454@post.gmane.org>
Message-ID: <B313F066-A464-4A5D-B538-9EDE6901DA3D@email.it>

>> file1: https://dl.dropbox.com/u/11879013/file1.m2ts
>>
>> file2: https://dl.dropbox.com/u/11879013/file2.m2ts
>
> I also tested cat file1.m2ts file1.m2ts >test1.ts and
> cat file2.m2ts file2.m2ts >test2.ts (and WMP) and it
> could be possible that this cannot be supported due
> to different encoding parameters.

I don't understand whether "cat file1.m2ts file1.m2ts >test1.ts" and "cat file2.m2ts file2.m2ts >test2.ts" work for you.
I can play them correctly.

> If you know that the stream changes, you can always
> close and reopen the decoder.

Is it possible to update the AVCodecContext or AVFormatContext instead of closing and re-opening the decoder?
Or is closing and reopening the decoder fast enough to provide smooth playback upon a stream change?

Thank you
Regards

Andrea



From cehoyos at ag.or.at Sun Nov 4 20:36:42 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sun, 4 Nov 2012 19:36:42 +0000 (UTC)
Subject: [Libav-user] How to handle dynamically changing streams /
parameters / codecs?
References: <4163B86F-B2AD-4177-9478-ADC40D5C1D7B@email.it>
<loom.20121103T164756-591@post.gmane.org>
<B3D40133-7E0C-40B3-9A5A-A38E15FFDBBB@email.it>
<loom.20121104T191516-454@post.gmane.org>
<B313F066-A464-4A5D-B538-9EDE6901DA3D@email.it>
Message-ID: <loom.20121104T203420-43@post.gmane.org>

Andrea3000 <andrea3000 at ...> writes:

>
> >> file1: https://dl.dropbox.com/u/11879013/file1.m2ts
> >>
> >> file2: https://dl.dropbox.com/u/11879013/file2.m2ts
> >
> > I also tested cat file1.m2ts file1.m2ts >test1.ts and
> > cat file2.m2ts file2.m2ts >test2.ts (and WMP) and it
> > could be possible that this cannot be supported due
> > to different encoding parameters.
>
> I don't understand if "cat file1.m2ts file1.m2ts >test1.ts"
> and "cat file2.m2ts file2.m2ts >test2.ts"
> do work for you.
> I can play them correctly.

test1.ts and test2.ts work for me and I therefore believe
it is possible that your original problem cannot be (easily)
fixed. But that doesn't mean you cannot open a ticket on trac.

> > If you know that the stream changes, you can always
> > close and reopen the decoder.
>
> Is it possible to update the AVCodecContext or
> AVFormatContext instead of closing and re-opening the
> decoder?

> Or is the decoder close and reopen fast enough to
> provide a smooth playback upon stream change?

Perhaps you should test this...

Carl Eugen


From giorgio9 at libero.it Thu Nov 1 23:38:10 2012
From: giorgio9 at libero.it (giorgio9 at libero.it)
Date: Thu, 1 Nov 2012 23:38:10 +0100 (CET)
Subject: [Libav-user] an help to write a player and avformat_find_streaminfo
Message-ID: <20846.9333681351809490335.JavaMail.defaultUser@defaultHost>

Hi all, I need your help. I'm trying to develop a simple decoder to
decode and show a network video stream. The source generates an mpeg4
video stream that does not have the Visual Object Sequence Start (VOSS)
and Visual Object Start (VOS) headers.

VOSS = 000001B0 00000000; VOS = 000001B5 00000005

because they aren't mandatory in an Elementary Stream.
I wrote a program where the decoder reads an sdp file with
avformat_open_input and then calls avformat_find_stream_info, which, if
I am not mistaken, tries to understand the type of stream and populate
AVFormatContext. After that it calls the other functions in order to
correctly populate AVCodecContext and AVCodec.
The problem is this:
1) If the stream arrives when I launch the program (immediately), I
see the video with good quality but with 3 seconds of delay. But if in
this situation I kill the source and relaunch it (leaving the decoding
program alive), afterwards I see a very good, real-time stream,
probably because the decoder has already loaded all the parameters and
can decode in real time without the delay that I have during the first
launch.

2) Instead, if I launch the program and start the source after, for
example, 10 seconds, the video also plays very slowly. It seems that it
does not detect the correct frame rate.

I don't have any of these problems with a source that generates VOSS
and VOS correctly. So I thought of the following solutions, but I need
your help, considering that I can only work on the decoder side.

1) Find the point in the decoder where I can manually add VOSS and VOS
as they should appear in the stream produced by the source. If this can
be a valid workaround, in which library can I make this modification?
2) Another solution could be to fill in AVCodec, AVCodecContext and
AVFormatContext programmatically instead of using
avformat_find_stream_info. But is this possible with
avformat_alloc_context?

Please help me, because I need to have a custom decoder for that stream.
Thanks to all for the attention.
G.


From steve.hart at rtsw.co.uk Sat Nov 3 17:33:30 2012
From: steve.hart at rtsw.co.uk (Steve Hart)
Date: Sat, 3 Nov 2012 16:33:30 +0000
Subject: [Libav-user] record and stream at the same time
Message-ID: <CADzc0bqFFgJ5RU9tR1SEd-56GX5WtHjFFFazGXb-w8ckDFdmeA@mail.gmail.com>

Hi,
I am trying to encode a movie to disk and at the same time make this
available to a process for playing. I also need to be able to seek within
that stream. Think of a disk recorder.

I have tried a separate encode and decode i.e. I encode and save to disk in
one thread and open and decode the same file in another thread. This does
not seem to work since av_read_frame stalls after reading a number of
frames. If I close and reopen the file it will read a bit further and then
stop. I ensure that the decode never tries to read past what is being
written.

I don't know if this is a sensible approach and would appreciate any
comments or pointers to a better one.

Steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121103/f9579810/attachment.html>

From snehalp277 at yahoo.com Mon Nov 5 11:24:17 2012
From: snehalp277 at yahoo.com (snehal parmar)
Date: Mon, 5 Nov 2012 02:24:17 -0800 (PST)
Subject: [Libav-user] Any resolution limit for MJPEG decoder ?
Message-ID: <1352111057.92137.BPMail_high_carrier@web164905.mail.bf1.yahoo.com>



Hello,

I wish to use ffmpeg to play an MJPEG stream in a video viewer.
I wanted to know if there is any limit on the supported resolution.
Is there a MAX width and MAX height?

For my application the max requirement is to decode "4000 x 3000". It looks like this is supported.

Looking at the mjpeg decoder source file in the ffmpeg library (ffmpeg-1.0/libavcodec/mjpegdec.c),
a function named "av_image_check_size" is used to check the image size. Following is the code snippet:
-----------------------xx----------
int av_image_check_size(...)
{
...
if ((int)w>0 && (int)h>0 && (w+128)*(uint64_t)(h+128) < INT_MAX/8)
return 0;
...
}
Note: This function is defined in "libavutil/imgutils.c" file.
----------------------xx-----------
INT_MAX = 2,147,483,647, so (w+128)*(h+128) must stay below INT_MAX/8 = 268,435,455.
This means a resolution of a little more than "16,000 x 16,000" is supported (considering a 1:1 ratio here).

Is my understanding correct ?

Thank You,
Snehal



From rogerdpack2 at gmail.com Mon Nov 5 11:26:51 2012
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Mon, 5 Nov 2012 03:26:51 -0700
Subject: [Libav-user] How can we set frame size for live stream?
In-Reply-To: <005a01cdb762$6c9981d0$45cc8570$@rdnet.fi>
References: <005a01cdb762$6c9981d0$45cc8570$@rdnet.fi>
Message-ID: <CAL1QdWfDBDUfP2nDz4d+1gp92g==+=MDG-PC_ygeERWeFM7wpw@mail.gmail.com>

On 10/31/12, Wenpeng Zhou <zhou.wenpeng at rdnet.fi> wrote:
> Hi,
>
> I use the USB camera to capture live video.
>
> The default codec context is:
> Input #0, dshow, from 'video=Mercury USB2.0 Camera':
> Duration: N/A, start: 58890.453000, bitrate: N/A
> Stream #0:0: Video: mjpeg, yuvj422p, 1280x720, 30 tbr, 10000k tbn, 30
> tbc
>
> I hope to change frame size from 1280x720 to 640x480.
>
> I tried
> AVCodecContext
> pCodecCtx->width = 640;
> pCodecCtx->height = 480;
>
> But this did not work.

My guess is you need to set a "key value pair", like
http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=274

From stefasab at gmail.com Tue Nov 6 01:32:04 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Tue, 6 Nov 2012 01:32:04 +0100
Subject: [Libav-user] How can we set frame size for live stream?
In-Reply-To: <005a01cdb762$6c9981d0$45cc8570$@rdnet.fi>
References: <005a01cdb762$6c9981d0$45cc8570$@rdnet.fi>
Message-ID: <20121106003204.GA2223@arborea>

On Wednesday 2012-10-31 14:22:40 +0200, Wenpeng Zhou wrote:
> Hi,
>
> I use the USB camera to capture live video.
>
> The default codec context is:
> Input #0, dshow, from 'video=Mercury USB2.0 Camera':
> Duration: N/A, start: 58890.453000, bitrate: N/A
> Stream #0:0: Video: mjpeg, yuvj422p, 1280x720, 30 tbr, 10000k tbn, 30
> tbc
>
> I hope to change frame size from 1280x720 to 640x480.
>
> I tried
> AVCodecContext
> pCodecCtx->width = 640;
> pCodecCtx->height = 480;
>
> But this did not work.

You can set the options in the dshow device, using the *private*
option "video_size":
http://ffmpeg.org/ffmpeg.html#dshow

Note that the input device may not honour the exact video size, or may
choose a video size that is "reasonably" similar to the one requested
(need to check the code for that, documentation updates are welcome).

From a programmatic point of view, you need to set the options for the
input format context in a dictionary and pass it to
avformat_open_input(), something like:

AVDictionary *options = NULL;
av_dict_set(&options, "video_size", "640x480", 0);
av_dict_set(&options, "pixel_format", "rgb24", 0);

if (avformat_open_input(&s, url, NULL, &options) < 0)
    abort();
...
av_dict_free(&options);

Check avformat.h doxy for more information.

From zhou.wenpeng at rdnet.fi Tue Nov 6 10:51:39 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Tue, 6 Nov 2012 11:51:39 +0200
Subject: [Libav-user] How can we set frame size for live stream?
In-Reply-To: <20121106003204.GA2223@arborea>
References: <005a01cdb762$6c9981d0$45cc8570$@rdnet.fi>
<20121106003204.GA2223@arborea>
Message-ID: <000001cdbc04$51bf6040$f53e20c0$@rdnet.fi>



>-----Original Message-----
>From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Stefano Sabatini
>Sent: 6 November 2012 2:32
>To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
>Subject: Re: [Libav-user] How can we set frame size for live stream?
>
>> Hi,
>>
>> I use the USB camera to capture live video.
>>
>> The default codec context is:
>> Input #0, dshow, from 'video=Mercury USB2.0 Camera':
>> Duration: N/A, start: 58890.453000, bitrate: N/A
>> Stream #0:0: Video: mjpeg, yuvj422p, 1280x720, 30 tbr, 10000k tbn,
>> 30 tbc
>>
>> I hope to change frame size from 1280x720 to 640x480.
>>
>> I tried
>> AVCodecContext
>> pCodecCtx->width = 640;
>> pCodecCtx->height = 480;
>>
>> But this did not work.
>
>You can set the options in the dshow device, using the *private* option
>"video_size":
>http://ffmpeg.org/ffmpeg.html#dshow
>
>Note that the input device may not honour the exact video size, or may
>choose a video size that is "reasonably" similar to the one requested (need
>to check the code for that, documentation updates are welcome).
>
>From a programmatic point of view, you need to set the options for the
>input format context in a dictionary and pass it to avformat_open_input(),
>something like:
>
>AVDictionary *options = NULL;
>av_dict_set(&options, "video_size", "640x480", 0);
>av_dict_set(&options, "pixel_format", "rgb24", 0);
>
>if (avformat_open_input(&s, url, NULL, &options) < 0)
>    abort();
>...
>av_dict_free(&options);
>
>Check avformat.h doxy for more information.

Hi Stefano,

Thanks very much !
This is really what I want.

_____________________________________________
>Libav-user mailing list
>Libav-user at ffmpeg.org
>http://ffmpeg.org/mailman/listinfo/libav-user


From miguelandres77 at hotmail.com Tue Nov 6 23:42:16 2012
From: miguelandres77 at hotmail.com (Miguel Brito)
Date: Tue, 6 Nov 2012 17:42:16 -0500
Subject: [Libav-user] Sending H.263 over RTP
Message-ID: <BAY173-W206A5A3873F5F50549934ADF6B0@phx.gbl>


Hi everyone!

I'm streaming video in H263/RTP using libav on Windows. Here's what I'm doing:

First I initialize my output format (with "rtp") and the codec H263/H263p (CODEC_ID_H263P or CODEC_ID_H263 passed to avcodec_find_encoder), then I start a loop to capture the video from the webcam (using OpenCV) and turn the frame into an AVFrame to finally encode and write (send) it.

My problem is that I want (and need) the stream to be in H263 (1996) or H263+ (1998), but no matter what I pass to avcodec_find_encoder I get RTP in H263++ (2000).

At least, the only way to demux it is by including "a=rtpmap:96 H263-2000/90000" in the receiver's SDP.

In RFC 4629 I read that H263 is a subset of H263+ and H263++, so I guess there must be a way to send it using those codecs. Is there a way, or am I missing something?

PS: one thing that confuses me is that when I try to send a video with an incorrect resolution for H263 it does complain about it, but with correct sizes it sends it in H263++ anyway.

Thanks!

From mark.kenna at sureviewsystems.com Thu Nov 8 12:48:53 2012
From: mark.kenna at sureviewsystems.com (Mark Kenna)
Date: Thu, 8 Nov 2012 11:48:53 +0000
Subject: [Libav-user] First frame not keyframe
Message-ID: <CAAxE61i_bcAiYAkAwm4NrKJEjqMGb2=uwZ5O9v8SyEhXtsv_1w@mail.gmail.com>

Hi Guys

I have an interesting issue where on trying to decode a mpeg1video file I
get:

[mpeg1video @ 01d16900] ignoring pic after 100
> [mpeg1video @ 01d16900] Missing picture start code
> [mpeg1video @ 01d16900] warning: first frame is no keyframe


After doing some research it appears that FFMpeg will drop the first frame
if it's deemed "broken, corrupt or not a keyframe", but it seems that the
first frame is present and looks perfectly valid when playing through
VLC/MPC. I have taken the following logs which indicate that the first
frame is indeed a non-keyframe. My question is whether there is any way to
salvage the first frame (which is quite important for me) using FFMpeg?

Frame dump from FFProbe (I've cut down the info from the second frame)

[FRAME]
media_type=video
key_frame=0
pkt_pts=48000
pkt_pts_time=0.040000
pkt_dts=48000
pkt_dts_time=0.040000
pkt_duration=48000
pkt_duration_time=0.040000
pkt_pos=4096
width=320
height=240
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=?
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
reference=3
[/FRAME]

[FRAME]
media_type=video
key_frame=0
pict_type=P
[/FRAME]

[FRAME]
key_frame=0
pict_type=P
[/FRAME]

[FRAME]
key_frame=0
pict_type=P
[/FRAME]

[FRAME]
key_frame=0
pict_type=P
[/FRAME]

[FRAME]
key_frame=1
pict_type=I
[/FRAME]


Packet dump from FFProbe

[PACKET]
codec_type=video
stream_index=0
pts=0
pts_time=0.000000
dts=0
dts_time=0.000000
duration=48000
duration_time=0.040000
size=4883
pos=N/A
flags=K
[/PACKET]

[PACKET]
codec_type=video
stream_index=0
pts=48000
pts_time=0.040000
dts=48000
dts_time=0.040000
duration=48000
duration_time=0.040000
size=28
pos=4096
flags=_
[/PACKET]

[PACKET]
codec_type=video
stream_index=0
pts=96000
pts_time=0.080000
dts=96000
dts_time=0.080000
duration=48000
duration_time=0.040000
size=28
pos=N/A
flags=_
[/PACKET]

Thanks guys,
Mark.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121108/83c43fe0/attachment.html>

From cehoyos at ag.or.at Thu Nov 8 15:25:07 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 8 Nov 2012 14:25:07 +0000 (UTC)
Subject: [Libav-user] First frame not keyframe
References: <CAAxE61i_bcAiYAkAwm4NrKJEjqMGb2=uwZ5O9v8SyEhXtsv_1w@mail.gmail.com>
Message-ID: <loom.20121108T152339-926@post.gmane.org>

Mark Kenna <mark.kenna at ...> writes:

> After doing some research it appears that FFMpeg
> will drop the first frame if it's deemed "broken,
> corrupt or not a keyframe", but it seems that the
> first frame is present and looks perfectly valid

Please provide a sample.

(Generally, please don't forget to always post your
command line together with complete, uncut console
output. All help will be the result of guessing
without it.)

Carl Eugen


From cehoyos at ag.or.at Thu Nov 8 15:26:46 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 8 Nov 2012 14:26:46 +0000 (UTC)
Subject: [Libav-user] Any resolution limit for MJPEG decoder ?
References: <1352111057.92137.BPMail_high_carrier@web164905.mail.bf1.yahoo.com>
Message-ID: <loom.20121108T152610-145@post.gmane.org>

snehal parmar <snehalp277 at ...> writes:

> I wish to use ffmpeg to play MJPEG stream on a video viewer.
> Wanted to know if there is any limit on resolution supported ?

Please test, if you have an unsupported input file, please report
it!

Carl Eugen


From info at denisgottardello.it Thu Nov 8 22:33:15 2012
From: info at denisgottardello.it (Denis)
Date: Thu, 8 Nov 2012 22:33:15 +0100
Subject: [Libav-user] vlc, latency
Message-ID: <201211082233.15178.info@denisgottardello.it>


Is there a way to get vlc to stream very quickly in a local network,
without latency?



--
www.denisgottardello.it
Skype: mrdebug
Videosurveillance and home automation!
http://www.denisgottardello.it/DomusBoss/DomusBossIndice.php

From x.morion.x at gmail.com Fri Nov 9 03:44:22 2012
From: x.morion.x at gmail.com (Aleksey Shubin)
Date: Fri, 9 Nov 2012 13:44:22 +1100
Subject: [Libav-user] Access violation in av_opt_set
Message-ID: <CAPo94BjyAXBeXuLM5w1d4UYc4Uvssiboc44Hk5uC3=g_0cw0aQ@mail.gmail.com>

I'm making changes in Windows application that uses ffmpeg API to
record video. Previously I was using Zeranoe ffmpeg builds but now I
need to compile it myself. I have managed to do it using MinGW. But
the problem is that my build produces access violation at the
following code line:

av_opt_set(codecContex->priv_data, "crf", crfString, 0);

I have checked crfString, it is "20.000000". Also checked ffmpeg code
and found that the codecContex->priv_data should be X264Context
pointer, so I included x264 headers and checked priv_data. It seems to
be valid.

I also tried to set some other options with av_opt_set ("wpredp",
"preset"). They don't produce the error but also seem not to work, as
the corresponding fields of priv_data aren't changed.

With the Zeranoe build this code works, so the only guess I have is
that something is wrong in the way I configured and built ffmpeg or
libx264. These are the configure commands I used for ffmpeg and libx264:

./configure --prefix=build --enable-shared --enable-avresample
--disable-stripping --enable-libx264 --enable-encoder=libx264
--enable-gpl --enable-muxer=ismv --extra-cflags="
-I/c/Temp/FFMpeg/x264" --extra-ldflags="-L/c/Temp/FFMpeg/x264"

./configure --disable-cli --enable-shared
--extra-ldflags=-Wl,--output-def=libx264.def

Unfortunately I have very little experience with C/C++ and with
configuring/building ffmpeg, and I don't even know the typical ways to
debug this kind of problem. So even general suggestions about a
direction in which I should investigate further could help.
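
For reference, a commonly used alternative (a sketch, not a fix for the crash
itself; open_x264 and the option values are illustrative) is to pass private
options such as "crf" through the dictionary given to avcodec_open2(), which
applies keys it does not recognise on the context to the codec's private
options:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
}

static int open_x264(AVCodecContext *ctx)
{
    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "crf", "20", 0);
    av_dict_set(&opts, "preset", "veryfast", 0);
    int ret = avcodec_open2(ctx, codec, &opts);
    /* Any entries still left in 'opts' were not recognised by the encoder. */
    av_dict_free(&opts);
    return ret;
}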

Thanks,
Aleksey

From bruce at spearmorgan.com Fri Nov 9 17:57:20 2012
From: bruce at spearmorgan.com (Bruce Wheaton)
Date: Fri, 9 Nov 2012 08:57:20 -0800
Subject: [Libav-user] FF_API_PIX_FMT, PixelFormat and AVPixelFormat Changes
Message-ID: <D2FB1D4C-1C41-4115-B5FC-97E3F10972AC@spearmorgan.com>

Is there any info about the changes in this area? They caused some problems in my code when the compatibility define of PixelFormat (instead of a type/enum actually named PixelFormat) turned out to be rather invasive with respect to other namespaces.

I #undef'ed PixelFormat and used AVPixelFormat in my code instead. But I see there's a switch guarded by FF_API_PIX_FMT, and I wonder what's planned here?

Bruce

From miscab at gmail.com Sun Nov 11 04:42:50 2012
From: miscab at gmail.com (=?gb2312?B?zuLBosP3IEpveSBXb28=?=)
Date: Sun, 11 Nov 2012 11:42:50 +0800
Subject: [Libav-user] How can we make two NTP-sync computers appears the
same while starting playing the same media file at the same time?
Message-ID: <00b901cdbfbe$a5db3330$f1919990$@gmail.com>

Hi folks,



I have a problem synchronizing playback on two computers.



I have two computers with identical hardware and software, and both of them are
synced with an NTP server on the same LAN. I wrote a libav program that runs
on both of them and plays the same media file (for example, test.mov) starting
at the same time (for example, 13:10:05). Internally, a timer with an interval
of 1 second is set up to check for the timeout.



But unfortunately I found that the pictures are the same at the very
beginning; gradually, though, they start to differ greatly as time goes
by. Obviously one plays fast and the other plays quite slowly.



So how can I make the two computers play media files with better
synchronization?



Thanks a million!



Joy Woo

11/11/12

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121111/29c796de/attachment.html>

From zhou.wenpeng at rdnet.fi Mon Nov 12 11:07:51 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Mon, 12 Nov 2012 12:07:51 +0200
Subject: [Libav-user] How can we use ffserver with Windows
Message-ID: <008401cdc0bd$93dfc4c0$bb9f4e40$@rdnet.fi>

Hi,
Can we use ffserver with Windows?
I downloaded ffmpeg from http://ffmpeg.zeranoe.com/builds/
but there is no ffserver.

So how can I get ffserver for Windows?


From zhou.wenpeng at rdnet.fi Mon Nov 12 14:00:22 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Mon, 12 Nov 2012 15:00:22 +0200
Subject: [Libav-user] How can we save captured frames to AVI file
Message-ID: <00a201cdc0d5$ad577750$080665f0$@rdnet.fi>

Hi,

I use a USB camera to capture streams. I would like to save the streams to an
AVI file.
How can I do that?

Thanks!


From rwoods at vaytek.com Mon Nov 12 21:50:24 2012
From: rwoods at vaytek.com (Ron Woods)
Date: Mon, 12 Nov 2012 14:50:24 -0600
Subject: [Libav-user] filtering_audio.c example not working
Message-ID: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF827EC@GODFATHERII.vaytek2003.local>

I am trying out the filtering_audio.c example provided with the ffmpeg libraries for Windows to extract the audio from an MP4 file, resample it to 8 kHz and convert it from stereo to mono. The example pipes the audio output via stdout to ffplay. I am using Visual Studio 2010; the example builds and runs successfully, but the result is clearly not the desired one. At the end of init_filters I added a call to avfilter_graph_dump(), and the graph looks correct, as does the pipe to ffplay, as shown in this trace:

abuffer filter args: time_base=1/24000:sample_rate=24000:sample_fmt=s16:channel_layout=0x4
Output: srate:8000Hz fmt:s16 chlayout:mono
+-----------+
| in |default--[24000Hz s16:mono]--Parsed_aconvert_0:default
| (abuffer) |
+-----------+

+-----------------+
Parsed_aresample_1:default--[8000Hz s16:mono]--default| out |
| (ffabuffersink) |
+-----------------+

+-------------------+
in:default--[24000Hz s16:mono]--default| Parsed_aconvert_0 |default--[24000Hz s16:mono]-Parsed_aresample_1:default
| (aconvert) |
+-------------------+

+--------------------+
Parsed_aconvert_0:default--[24000Hz s16:mono]--default| Parsed_aresample_1 |default--[8000Hz s16:mono]--out:default
| (aresample) |
+--------------------+
[s16le @ 003edda0] Invalid sample rate 0 specified using default of 44100
[s16le @ 003edda0] Estimating duration from bitrate, this may be inaccurate
Input #0, s16le, from 'pipe:':
Duration: N/A, start: 0.000000, bitrate: 128 kb/s
Stream #0:0: Audio: pcm_s16le, 8000 Hz, 1 channels, s16, 128 kb/s

If you have made this example run properly on Windows in VS 2010, would you please provide any tips or changes you made for it to work?


From rogerdpack2 at gmail.com Tue Nov 13 06:05:59 2012
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Mon, 12 Nov 2012 22:05:59 -0700
Subject: [Libav-user] vlc, latency
In-Reply-To: <201211082233.15178.info@denisgottardello.it>
References: <201211082233.15178.info@denisgottardello.it>
Message-ID: <CAL1QdWfqVd7Vzg_fFULzsdNdhkS1dRUSYMGxsuf-e458a06_1Q@mail.gmail.com>

On 11/8/12, Denis <info at denisgottardello.it> wrote:
>
> Is there a way for get vlc streaming very quickly and fast in local network
>
> without latency?

Wrong mailing list?

From rogerdpack2 at gmail.com Tue Nov 13 06:06:38 2012
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Mon, 12 Nov 2012 22:06:38 -0700
Subject: [Libav-user] How can we make two NTP-sync computers appears the
same while starting playing the same media file at the same time?
In-Reply-To: <00b901cdbfbe$a5db3330$f1919990$@gmail.com>
References: <00b901cdbfbe$a5db3330$f1919990$@gmail.com>
Message-ID: <CAL1QdWdKKy4aiRckpFuy_NJtpEcbj_V=F7NwKmQfdK02YvsQ+A@mail.gmail.com>

mplayer has a udp sync option that may be useful to you.

On 11/10/12, ??? Joy Woo <miscab at gmail.com> wrote:
> Hi folks,
>
>
>
> I got a problem while synchronizing two computers' playing.
>
>
>
> I have two computers of the same hardware and software, and both of them
> are
> sync with a NTP server in the same LAN. Then I wrote a libav program
> running
> on both of them to play the same media file (for example, test.mov) at the
> same beginning time(for example, 13:10:05). Internally, a timer is set up
> to
> check the timeout with an interval of 1 second.
>
>
>
> But unfortunately I found out that the pictures are the same at the very
> beginning, but gradually, the pictures start to differ greatly as time went
> by. It's obvious that one plays fast and the other plays pretty slow.
>
>
>
> So how to make the two computers play media files with better
> synchronization?
>
>
>
> Thanks a million!
>
>
>
> Joy Woo
>
> 11/11/12
>
>

From rogerdpack2 at gmail.com Tue Nov 13 06:07:14 2012
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Mon, 12 Nov 2012 22:07:14 -0700
Subject: [Libav-user] How can we use ffserver with Windows
In-Reply-To: <008401cdc0bd$93dfc4c0$bb9f4e40$@rdnet.fi>
References: <008401cdc0bd$93dfc4c0$bb9f4e40$@rdnet.fi>
Message-ID: <CAL1QdWcKPXXaRnNHutnwHhT++Pyp3knPsY8iQu39GA9A9=vRZg@mail.gmail.com>

On 11/12/12, Wenpeng Zhou <zhou.wenpeng at rdnet.fi> wrote:
> Hi,
> Can we use ffsever with windows?
> I download ffmpeg from http://ffmpeg.zeranoe.com/builds/
> But there is no ffserver.
>
> So how can I get fferver for windows?

Build/run it under Cygwin, perhaps?
There are other servers that may work on Windows...

From Tom.Isaacson at navico.com Tue Nov 13 06:12:34 2012
From: Tom.Isaacson at navico.com (Tom Isaacson)
Date: Tue, 13 Nov 2012 05:12:34 +0000
Subject: [Libav-user] Displaying video in Windows
Message-ID: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>

I've looked at the source for ffplay and can see that the video is displayed using SDL. But as far as I can tell this doesn't actually use a real window on Windows; it just creates a space on the screen and then watches for mouse events in that area.

Are there any examples that show how to play video on Windows using the native functions rather than SDL?

Thanks.




From miscab at gmail.com Tue Nov 13 09:12:44 2012
From: miscab at gmail.com (=?utf-8?B?5ZC056uL5piOIEpveSBXb28=?=)
Date: Tue, 13 Nov 2012 16:12:44 +0800
Subject: [Libav-user] =?utf-8?b?562U5aSNOiAgSG93IGNhbiB3ZSBtYWtlIHR3byBO?=
=?utf-8?q?TP-sync_computers_appears_the_same_while_starting_playin?=
=?utf-8?q?g_the_same_media_file_at_the_same_time=3F?=
In-Reply-To: <CAL1QdWdKKy4aiRckpFuy_NJtpEcbj_V=F7NwKmQfdK02YvsQ+A@mail.gmail.com>
References: <00b901cdbfbe$a5db3330$f1919990$@gmail.com>
<CAL1QdWdKKy4aiRckpFuy_NJtpEcbj_V=F7NwKmQfdK02YvsQ+A@mail.gmail.com>
Message-ID: <006101cdc176$ab7ab5e0$027021a0$@gmail.com>

Thanks first, I'll try.

-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Roger Pack
Sent: Tuesday, 13 November 2012 13:07
To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
Subject: Re: [Libav-user] How can we make two NTP-sync computers appears the same while starting playing the same media file at the same time?

mplayer has a udp sync option that may be useful to you.

On 11/10/12, ??? Joy Woo <miscab at gmail.com> wrote:
> Hi folks,
>
>
>
> I got a problem while synchronizing two computers' playing.
>
>
>
> I have two computers of the same hardware and software, and both of
> them are sync with a NTP server in the same LAN. Then I wrote a libav
> program running on both of them to play the same media file (for
> example, test.mov) at the same beginning time(for example, 13:10:05).
> Internally, a timer is set up to check the timeout with an interval of
> 1 second.
>
>
>
> But unfortunately I found out that the pictures are the same at the
> very beginning, but gradually, the pictures start to differ greatly as
> time went by. It's obvious that one plays fast and the other plays pretty slow.
>
>
>
> So how to make the two computers play media files with better
> synchronization?
>
>
>
> Thanks a million!
>
>
>
> Joy Woo
>
> 11/11/12
>
>
_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user


From miscab at gmail.com Tue Nov 13 09:16:52 2012
From: miscab at gmail.com (=?gb2312?B?zuLBosP3IEpveSBXb28=?=)
Date: Tue, 13 Nov 2012 16:16:52 +0800
Subject: [Libav-user] =?gb2312?b?tPC4tDogIERpc3BsYXlpbmcgdmlkZW8gaW4gV2lu?=
=?gb2312?b?ZG93cw==?=
In-Reply-To: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>
References: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>
Message-ID: <006301cdc177$3f3b43d0$bdb1cb70$@gmail.com>

Qt has a multimedia framework named "Phonon" that may be helpful.

-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org]
On Behalf Of Tom Isaacson
Sent: Tuesday, 13 November 2012 13:13
To: Libav-user at ffmpeg.org
Subject: [Libav-user] Displaying video in Windows

I've looked at the source for ffplay and can see that the video is displayed
using SDL. But as far as I can tell this doesn't actually use a real window
in Windows, it just creates a space on the screen then watches for mouse
events in the area.

Are there any examples that show how to play video in Windows using the
native functions rather than SDL?

Thanks.



_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user


From mybrokenbeat at gmail.com Tue Nov 13 11:17:23 2012
From: mybrokenbeat at gmail.com (Oleg)
Date: Tue, 13 Nov 2012 12:17:23 +0200
Subject: [Libav-user] Displaying video in Windows
In-Reply-To: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>
References: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>
Message-ID: <101E0FF8-D371-4862-906B-93BCE018C81E@gmail.com>

The easiest native way:

Decoding part:
Read thread: open the file, read packets and push them to shared audio and video queues;
Audio thread: read packets from the audio queue, decode them and push them to the audio play queue (in PCM format);
Video thread: read packets from the video queue, decode them, convert from YUV to RGB (using sws_scale) and push them to the video play queue (in RGB format).
Examples for this part are ffplay and Dranger's tutorial (http://dranger.com/ffmpeg/).

Drawing part:
1. Create a window and get its HWND;
2. Create an OpenGL context and set the pixel format for that window: http://www.nullterminator.net/opengl32.html ;
3. Create a timer for the window (WM_TIMER) with an interval equal to 1 second / frame rate;
4. When the timer event fires, draw the next frame from the video queue using glTexSubImage2D (probably the fastest way), glTexImage2D or glDrawBuffer.

Audio part:
1. Initialize XAudio2 and create a media buffer with the same sample rate, sample format and channel count as the audio in your video;
2. Attach a callback for playing audio;
3. When the callback is called, read the next audio frame from the audio queue and play it.

These steps are the minimum for creating your own video player that will work on Windows 2000 and later versions. You can switch the drawing part to the DirectX API, but that way is not cross-platform.

After this you will have problems with audio/video synchronization; how to solve them is also shown in ffplay and Dranger's example.
After that you will probably want to convert YUV to RGB in a shader, but that is another story.

So, as you can see, it is not as easy as you may think, and ffplay is probably the best basis for fast development. You may also try VLC's library APIs, which embed directly into an HWND. SDL can also create a window from an existing one using SDL_CreateWindowFrom, for example. (A minimal sketch of the drawing part follows below.)
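
For illustration, a rough, untested sketch (not from the original mail) of the drawing part only, in plain C with Win32 and OpenGL; next_frame_rgb() is a hypothetical stand-in for the video play queue, here filled with a test pattern, and the decoding/audio parts are omitted entirely:

#include <windows.h>
#include <GL/gl.h>

#define W 640
#define H 480

static HDC    g_dc;
static GLuint g_tex;

/* hypothetical stand-in for the video play queue: returns the next RGB24 frame */
static const unsigned char *next_frame_rgb(void)
{
    static unsigned char buf[W * H * 3];
    static int t;
    int i;
    for (i = 0; i < W * H; i++)
        buf[3 * i] = (unsigned char)(i + t);   /* moving red gradient */
    t += 3;
    return buf;
}

static void init_gl(HWND hwnd)                  /* step 2: GL context + texture */
{
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 24 };
    g_dc = GetDC(hwnd);
    SetPixelFormat(g_dc, ChoosePixelFormat(g_dc, &pfd), &pfd);
    wglMakeCurrent(g_dc, wglCreateContext(g_dc));

    glGenTextures(1, &g_tex);
    glBindTexture(GL_TEXTURE_2D, g_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* allocate texture storage once; frames are then uploaded with glTexSubImage2D */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glEnable(GL_TEXTURE_2D);
}

static void draw_frame(void)                    /* step 4: upload and draw one frame */
{
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H, GL_RGB,
                    GL_UNSIGNED_BYTE, next_frame_rgb());
    glBegin(GL_QUADS);                          /* full-window quad, flipped vertically */
    glTexCoord2f(0, 1); glVertex2f(-1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1,  1);
    glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glEnd();
    SwapBuffers(g_dc);
}

static LRESULT CALLBACK wnd_proc(HWND h, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_TIMER:   draw_frame();       return 0;   /* one timer tick per frame */
    case WM_DESTROY: PostQuitMessage(0); return 0;
    }
    return DefWindowProc(h, msg, wp, lp);
}

int main(void)
{
    WNDCLASS wc = {0};
    HWND hwnd;
    MSG msg;

    wc.lpfnWndProc   = wnd_proc;
    wc.hInstance     = GetModuleHandle(NULL);
    wc.lpszClassName = "glplayer";
    RegisterClass(&wc);                         /* step 1: window and HWND */
    hwnd = CreateWindow("glplayer", "player", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                        CW_USEDEFAULT, CW_USEDEFAULT, W, H,
                        NULL, NULL, wc.hInstance, NULL);
    init_gl(hwnd);
    SetTimer(hwnd, 1, 1000 / 25, NULL);         /* step 3: ~25 fps timer */

    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}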


On 13.11.2012, at 7:12, Tom Isaacson wrote:

> I've looked at the source for ffplay and can see that the video is displayed using SDL. But as far as I can tell this doesn't actually use a real window in Windows, it just creates a space on the screen then watches for mouse events in the area.
>
> Are there any examples that show how to play video in Windows using the native functions rather than SDL?
>
> Thanks.
>
>
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121113/af429cc9/attachment.html>

From Tom.Isaacson at navico.com Tue Nov 13 11:52:14 2012
From: Tom.Isaacson at navico.com (Tom Isaacson)
Date: Tue, 13 Nov 2012 10:52:14 +0000
Subject: [Libav-user] Displaying video in Windows
In-Reply-To: <101E0FF8-D371-4862-906B-93BCE018C81E@gmail.com>
References: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET
.INT><101E0FF8-D371-4862-906B-93BCE018C81E@gmail.com>
Message-ID: <B36E81988B048A4FA3600EDE585F756B028EE704@EXCH02-AKLNZ.MARINE.NET.INT>

I probably should have mentioned that the source is an RTSP server and it's video only, so audio/video synchronisation is not an issue. What I want to do is write a Windows 8 Metro app that displays this video stream. Unfortunately Microsoft doesn't support RTSP (see http://stackoverflow.com/questions/12994188/how-can-i-play-h-264-rtsp-video-in-windows-8-metro-c-sharp-xaml-app), so rather than attempt to write a media source plugin I'm going to try using VLC's libraries. Hopefully I'll also be able to recompile them for ARM to get it working on WinRT, but that will come later.

Thanks for your help.

From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Oleg
Sent: Tuesday, 13 November 2012 11:17 p.m.
To: This list is about using libavcodec, libavformat, libavutil,libavdevice and libavfilter.
Subject: Re: [Libav-user] Displaying video in Windows

The most easiest native way:

Decoding part:
Read thread: Open file, read packets and push them to audio and video shared queues;
Audio thread: Read packets from audio queue, decode and push to audio play queue(in PCM format);
Video thread: Read packets from video queue, decode, convert from YUV to RGB(using sws_scale) and push to video play queue(in RGB format).
Examples for this part are ffplay and Dranger's example (http://dranger.com/ffmpeg/);

Drawing part:
1. Create window and get HWND pointer;
2. Create OpenGL context and set pixel format for that window http://www.nullterminator.net/opengl32.html ;
3. Create timer for your window(WM_TIMER) with interval equal to video's 1second / frame rate;
4. When receiving timer event - draw next frame from video queue using glTexSubImage2d(probably, fastest way)\ glTexImage2d or glDrawBuffer;

Audio part:
1. Init XAudio2 and create media buffer with sample rate, frequency and channels same as in your video;
2. Attach callback for playing audio;
3. When callback is called - read next audio frame from audio queue and play it;

These steps are minimum for creating own video player which will work on windows 2000 and all older versions. You can change drawing part to DirectX API, but this way is not cross-platform.

After this you will have problems with audio\video synchronizations. How to solve them you may also see in ffplay or Dranger's example.
After this you will probably want to convert YUV to RGB on shader, but that is another story.

So, as you see it is not so easy as you may think. So ffplay is probably best way for fast development. Also you may try VLC's libraries APIs which are embedding directly to HWND. SDL may also create window using SDL_CreateWindowFrom, for example.


On 13.11.2012, at 7:12, Tom Isaacson wrote:


I've looked at the source for ffplay and can see that the video is displayed using SDL. But as far as I can tell this doesn't actually use a real window in Windows, it just creates a space on the screen then watches for mouse events in the area.

Are there any examples that show how to play video in Windows using the native functions rather than SDL?

Thanks.



_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user




From i.like.privacy.too at gmail.com Tue Nov 13 17:48:26 2012
From: i.like.privacy.too at gmail.com (Camera Man)
Date: Tue, 13 Nov 2012 11:48:26 -0500
Subject: [Libav-user] Displaying video in Windows
In-Reply-To: <B36E81988B048A4FA3600EDE585F756B028EE704@EXCH02-AKLNZ.MARINE.NET.INT>
References: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET .INT><101E0FF8-D371-4862-906B-93BCE018C81E@gmail.com>
<B36E81988B048A4FA3600EDE585F756B028EE704@EXCH02-AKLNZ.MARINE.NET.INT>
Message-ID: <50A279DA.6070101@gmail.com>

An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121113/fbf4743f/attachment.html>

From zhou.wenpeng at rdnet.fi Tue Nov 13 22:03:27 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Tue, 13 Nov 2012 23:03:27 +0200
Subject: [Libav-user] How can we save live stream (frames) with C++
Message-ID: <001701cdc1e2$5457eaf0$fd07c0d0$@rdnet.fi>

Hi,

I will use dshow to capture a live stream and then save individual frames to a
buffer. I plan to save the frames in the buffer to an AVI file with C++.

How can I save the frames to an AVI file?

Thanks!


From rwoods at vaytek.com Tue Nov 13 22:30:59 2012
From: rwoods at vaytek.com (Ron Woods)
Date: Tue, 13 Nov 2012 15:30:59 -0600
Subject: [Libav-user] How can we save live stream (frames) with C++
In-Reply-To: <001701cdc1e2$5457eaf0$fd07c0d0$@rdnet.fi>
References: <001701cdc1e2$5457eaf0$fd07c0d0$@rdnet.fi>
Message-ID: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82802@GODFATHERII.vaytek2003.local>

Take a look at the muxing.c example, but instead of the source coming from a generated frame as in fill_yuv_image(), use the frame coming from your buffer.
Use the same idea for the audio stream. (A rough sketch follows below.)
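
For illustration, a hedged sketch (not the poster's or the example's exact code) of what that substitution amounts to for one video frame; copy_frame_from_buffer() is a hypothetical helper that fills the AVFrame from the capture buffer, and the rescaling mirrors what muxing.c does before handing the packet to the muxer:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* hypothetical: fills frame->data[] from the capture buffer, converting to
 * st->codec->pix_fmt (e.g. with sws_scale) if necessary */
extern void copy_frame_from_buffer(AVFrame *frame);

static int write_buffered_frame(AVFormatContext *oc, AVStream *st,
                                AVFrame *frame, int64_t frame_index)
{
    AVPacket pkt;
    int got_packet = 0, ret;

    copy_frame_from_buffer(frame);      /* instead of fill_yuv_image() */
    frame->pts = frame_index;           /* in st->codec->time_base units */

    av_init_packet(&pkt);
    pkt.data = NULL;                    /* let the encoder allocate the data */
    pkt.size = 0;

    ret = avcodec_encode_video2(st->codec, &pkt, frame, &got_packet);
    if (ret < 0 || !got_packet)
        return ret;

    pkt.stream_index = st->index;
    /* rescale from the codec time base to the stream time base before muxing */
    if (pkt.pts != AV_NOPTS_VALUE)
        pkt.pts = av_rescale_q(pkt.pts, st->codec->time_base, st->time_base);
    if (pkt.dts != AV_NOPTS_VALUE)
        pkt.dts = av_rescale_q(pkt.dts, st->codec->time_base, st->time_base);

    return av_interleaved_write_frame(oc, &pkt);
}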

-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Wenpeng Zhou
Sent: Tuesday, November 13, 2012 3:03 PM
To: 'This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.'
Subject: [Libav-user] How can we save live stream (frames) with C++

Hi,

I will use dshow to capture live stream, then save individual frames to a buffer. I plan to save the frames in the buffer to an AVI file with C++.

How can I save frames to an AVI file?

Thanks!

_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

From stefasab at gmail.com Tue Nov 13 23:12:19 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Tue, 13 Nov 2012 23:12:19 +0100
Subject: [Libav-user] How can we save captured frames to AVI file
In-Reply-To: <00a201cdc0d5$ad577750$080665f0$@rdnet.fi>
References: <00a201cdc0d5$ad577750$080665f0$@rdnet.fi>
Message-ID: <20121113221219.GK2615@arborea>

On date Monday 2012-11-12 15:00:22 +0200, Wenpeng Zhou wrote:
> Hi,
>
> I use a USB camera to capture streams. I hope to save streams to an AVI
> file.

Check the dshow/video4linux2 devices in libavdevice (doc/indevs.texi, or
the ffmpeg manual). There is nothing special about an input device:
you treat it like any other input format and demux it; the demuxing.c example may help. (A sketch follows below.)
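
For illustration, a hedged sketch of that idea; the device name "video=USB Video Device" is a placeholder, and everything after opening is plain demuxing as in demuxing.c:

#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>

static int open_camera(AVFormatContext **fmt_ctx)
{
    AVInputFormat *dshow;
    int ret;

    av_register_all();
    avdevice_register_all();                 /* registers dshow, video4linux2, ... */

    dshow = av_find_input_format("dshow");
    if (!dshow)
        return AVERROR_DEMUXER_NOT_FOUND;

    /* the device is opened exactly like any other input */
    ret = avformat_open_input(fmt_ctx, "video=USB Video Device", dshow, NULL);
    if (ret < 0)
        return ret;

    ret = avformat_find_stream_info(*fmt_ctx, NULL);
    if (ret < 0)
        return ret;

    /* ... av_read_frame() loop, decode, feed the frames to an AVI muxer ... */
    return 0;
}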

> How can I do ?
--
FFmpeg = Fanciful and Fanciful Mind-dumbing Portable Ecumenical Generator

From stefasab at gmail.com Tue Nov 13 23:19:17 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Tue, 13 Nov 2012 23:19:17 +0100
Subject: [Libav-user] filtering_audio.c example not working
In-Reply-To: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF827EC@GODFATHERII.vaytek2003.local>
References: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF827EC@GODFATHERII.vaytek2003.local>
Message-ID: <20121113221917.GL2615@arborea>

On date Monday 2012-11-12 14:50:24 -0600, Ron Woods wrote:
> I am trying out the filtering_audio.c example provided with the ffmpeg libraries for Windows to extract the audio from a MP4 file, resample to 8 KHz and convert from stereo to mono. The example pipes the audio output via stdout to ffplay. I am using Visual Studio 2010 and the example successfully builds and runs but the result is clearly not the desired result. At the end of init_filters I added a call to avfilter_graph_dump() and it all looks correct and also the pipe to ffplay as in this trace:
>
> abuffer filter args: time_base=1/24000:sample_rate=24000:sample_fmt=s16:channel_layout=0x4
> Output: srate:8000Hz fmt:s16 chlayout:mono
> +-----------+
> | in |default--[24000Hz s16:mono]--Parsed_aconvert_0:default
> | (abuffer) |
> +-----------+
>
> +-----------------+
> Parsed_aresample_1:default--[8000Hz s16:mono]--default| out |
> | (ffabuffersink) |
> +-----------------+
>
> +-------------------+
> in:default--[24000Hz s16:mono]--default| Parsed_aconvert_0 |default--[24000Hz s16:mono]-Parsed_aresample_1:default
> | (aconvert) |
> +-------------------+
>
> +--------------------+
> Parsed_aconvert_0:default--[24000Hz s16:mono]--default| Parsed_aresample_1 |default--[8000Hz s16:mono]--out:default
> | (aresample) |
> +--------------------+

> [s16le @ 003edda0] Invalid sample rate 0 specified using default of 44100

This is fishy.

> [s16le @ 003edda0] Estimating duration from bitrate, this may be inaccurate
> Input #0, s16le, from 'pipe:':
> Duration: N/A, start: 0.000000, bitrate: 128 kb/s
> Stream #0:0: Audio: pcm_s16le, 8000 Hz, 1 channels, s16, 128 kb/s
>
> If you have made this example run properly on Windows in VS 2010, would you please provide any tips or changes you made for it to work?

I just tried latest git and it seems to work fine here (Linux). Does the
problem depend on the input file?
--
FFmpeg = Funny and Fierce Merciless Practical Extended Goblin

From rwoods at vaytek.com Tue Nov 13 23:26:13 2012
From: rwoods at vaytek.com (Ron Woods)
Date: Tue, 13 Nov 2012 16:26:13 -0600
Subject: [Libav-user] filtering_audio.c example not working
In-Reply-To: <20121113221917.GL2615@arborea>
References: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF827EC@GODFATHERII.vaytek2003.local>
<20121113221917.GL2615@arborea>
Message-ID: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82803@GODFATHERII.vaytek2003.local>

I have tried it with AVI and MOV input files as well -- same result; so, no, it doesn't seem to depend on the input file.
I am using the same filter technique in another context to transcode audio from an input movie to a WAV output file, and it works fine there.
Maybe it is something to do with the piping to ffplay that is not working right on Windows?

-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Stefano Sabatini
Sent: Tuesday, November 13, 2012 4:19 PM
To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
Subject: Re: [Libav-user] filtering_audio.c example not working

On date Monday 2012-11-12 14:50:24 -0600, Ron Woods wrote:
> I am trying out the filtering_audio.c example provided with the ffmpeg libraries for Windows to extract the audio from a MP4 file, resample to 8 KHz and convert from stereo to mono. The example pipes the audio output via stdout to ffplay. I am using Visual Studio 2010 and the example successfully builds and runs but the result is clearly not the desired result. At the end of init_filters I added a call to avfilter_graph_dump() and it all looks correct and also the pipe to ffplay as in this trace:
>
> abuffer filter args:
> time_base=1/24000:sample_rate=24000:sample_fmt=s16:channel_layout=0x4
> Output: srate:8000Hz fmt:s16 chlayout:mono
> +-----------+
> | in |default--[24000Hz s16:mono]--Parsed_aconvert_0:default
> | (abuffer) |
> +-----------+
>
> +-----------------+
> Parsed_aresample_1:default--[8000Hz s16:mono]--default| out |
> | (ffabuffersink) |
>
> +-----------------+
>
> +-------------------+
> in:default--[24000Hz s16:mono]--default| Parsed_aconvert_0 |default--[24000Hz s16:mono]-Parsed_aresample_1:default
> | (aconvert) |
> +-------------------+
>
>
> +--------------------+ Parsed_aconvert_0:default--[24000Hz s16:mono]--default| Parsed_aresample_1 |default--[8000Hz s16:mono]--out:default
> | (aresample) |
>
> +--------------------+

> [s16le @ 003edda0] Invalid sample rate 0 specified using default of
> 44100

This is fishy.

> [s16le @ 003edda0] Estimating duration from bitrate, this may be
> inaccurate Input #0, s16le, from 'pipe:':
> Duration: N/A, start: 0.000000, bitrate: 128 kb/s
> Stream #0:0: Audio: pcm_s16le, 8000 Hz, 1 channels, s16, 128 kb/s
>
> If you have made this example run properly on Windows in VS 2010, would you please provide any tips or changes you made for it to work?

I just tried latest git and seems to work fine here (Linux). Does the problem depend on the input file?
--
FFmpeg = Funny and Fierce Merciless Practical Extended Goblin _______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

From stefasab at gmail.com Wed Nov 14 00:29:29 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Wed, 14 Nov 2012 00:29:29 +0100
Subject: [Libav-user] filtering_audio.c example not working
In-Reply-To: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82803@GODFATHERII.vaytek2003.local>
References: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF827EC@GODFATHERII.vaytek2003.local>
<20121113221917.GL2615@arborea>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82803@GODFATHERII.vaytek2003.local>
Message-ID: <20121113232929.GN2615@arborea>

On date Tuesday 2012-11-13 16:26:13 -0600, Ron Woods wrote:
> I have tried it on AVI or MOV input file as well -- same result; so, no, it doesn't seem to depend on input file.
> I am using same filter technique in another context to transcode audio from input movie to WAV output file, and it works fine there.
> Maybe it is something to do with the piping to ffplay that is not working right in Windows?

Possible. Try putting the output in a file, and then read it with
ffplay. IIRC there was an issue with Windows and pipes (and I'm not
sure cmd.exe really supports them properly); you may want to search the
trac tickets.

From rwoods at vaytek.com Wed Nov 14 04:17:21 2012
From: rwoods at vaytek.com (Ron Woods)
Date: Tue, 13 Nov 2012 21:17:21 -0600
Subject: [Libav-user] filtering_audio.c example not working
In-Reply-To: <20121113232929.GN2615@arborea>
References: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF827EC@GODFATHERII.vaytek2003.local>
<20121113221917.GL2615@arborea>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82803@GODFATHERII.vaytek2003.local>
<20121113232929.GN2615@arborea>
Message-ID: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82804@GODFATHERII.vaytek2003.local>

Per your recommendation, I tried the following:
filtering_audio test.mp4 > test_audio
ffplay -f s16le -ar 8000 -ac 1 test_audio
but with no better luck than with:
filtering_audio test.mp4 | ffplay -f s16le -ar 8000 -ac 1 -

Where should I search for the "trac tickets"?

-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Stefano Sabatini
Sent: Tuesday, November 13, 2012 5:29 PM
To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
Subject: Re: [Libav-user] filtering_audio.c example not working

On date Tuesday 2012-11-13 16:26:13 -0600, Ron Woods wrote:
> I have tried it on AVI or MOV input file as well -- same result; so, no, it doesn't seem to depend on input file.
> I am using same filter technique in another context to transcode audio from input movie to WAV output file, and it works fine there.
> Maybe it is something to do with the piping to ffplay that is not working right in Windows?

Possible. Try to put the output in a file, and then read it with ffplay. IIRC there was an issue with Windows and pipes (and I'm not sure cmd.com really supports them), you may want to search in trac tickets.
_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

From phamsyquybk at gmail.com Wed Nov 14 05:31:12 2012
From: phamsyquybk at gmail.com (Quy Pham Sy)
Date: Wed, 14 Nov 2012 13:31:12 +0900
Subject: [Libav-user] when to call av_register_all()
Message-ID: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>

Hi, I'm very new to this, so please bear with me if the question is stupid.

In my app there will be more than one decoder instance decoding multiple
files at the same time, so there is more than one place where I need to do
all the initialization before opening video files and starting to decode.

My question is: should I call av_register_all() for each decoding
process? When and where should it be called?

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121114/8e1845c9/attachment.html>

From mrfun.china at gmail.com Wed Nov 14 05:53:21 2012
From: mrfun.china at gmail.com (FFMPEG)
Date: Wed, 14 Nov 2012 15:53:21 +1100
Subject: [Libav-user] How to solve 'container format requires global
headers' problem?
Message-ID: <A64B077F-873B-46CA-A504-D96A5C4A782B@gmail.com>

Hi guys,

I'm trying to use the Matroska container format for H.264 video and AAC audio.

When I run my program, there's a warning: 'Codec for stream 0 does not use global headers but container format requires global headers'

Does anyone know how to solve that? Thanks

From nicolas.george at normalesup.org Tue Nov 13 23:03:39 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Tue, 13 Nov 2012 23:03:39 +0100
Subject: [Libav-user] Displaying video in Windows
In-Reply-To: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>
References: <B36E81988B048A4FA3600EDE585F756B028EE5D0@EXCH02-AKLNZ.MARINE.NET.INT>
Message-ID: <20121113220335.GA383@phare.normalesup.org>

On tridi 23 brumaire, an CCXXI, Tom Isaacson wrote:
> I've looked at the source for ffplay and can see that the video is
> displayed using SDL. But as far as I can tell this doesn't actually use a
> real window in Windows, it just creates a space on the screen then watches
> for mouse events in the area.

Do you experience the same problem with other, possibly more popular,
software based on SDL? Frozen Bubble, DOSBox and ScummVM could be possible
candidates, although I am not sure whether any of them has a native Windows
interface.

The thing is, despite the bad image it has among a lot of skillful developers,
SDL is widely used for cross-platform games, and it mostly does the job. If
it did not work on Windows, I am pretty sure it would be common knowledge.

It is entirely possible that ffplay is doing something wrong with the SDL
API that causes it to have incomplete window decorations. Or it is entirely
possible that something strange on your system causes SDL to misbehave.
Testing other SDL-based programs would help to find out.

Regards,

--
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121113/2a07d01f/attachment.asc>

From chihchianglu at gmail.com Tue Nov 13 03:01:40 2012
From: chihchianglu at gmail.com (Chih-Chiang Lu)
Date: Mon, 12 Nov 2012 18:01:40 -0800
Subject: [Libav-user] don't know how to get RTCP packet
Message-ID: <CAGxU4w_3K2mTeER+yLYO1N4MUctz-B4RUzPhf6EWReBb7XU_-w@mail.gmail.com>

Hi,

I'm using ffmpeg to retrieve the video and data from a camera.
This camera sends video through RTSP and data through ONVIF.

I call "avformat_open_input" -> "avformat_find_stream_info", get 2
streams from the camera, and then just use "av_read_frame" to do what I want
with these 2 streams.
Everything works as I expect.

However, I would also like to retrieve the RTCP packets in order to get the
NTP timestamps.
I have no idea how to do that.

Could you shed some light on this?

Thanks,
Jeff
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121112/e2b99ccc/attachment.html>

From ryltsov at gmail.com Tue Nov 13 23:36:21 2012
From: ryltsov at gmail.com (Roman Ryltsov)
Date: Wed, 14 Nov 2012 00:36:21 +0200
Subject: [Libav-user] H.264 decoder shooting "no picture " and "Internal
error, picture buffer overflow"
Message-ID: <50A2CB65.2030000@roman.adenix.net>

Hello,

(sorry if it comes twice - the previous one was sent from wrong address)

I am seeing a problem with libavcodec's H.264 decoder, which I need some
help with.

I was using the codec to decode H.264 video in a straightforward way,
and at some point I realized that it fails to decode specific content
coming from the x264 encoder. After some investigation I found that, with
everything else unchanged, if the encoder applies a constraint and limits
the output to be baseline compatible, things work perfectly. If this
constraint is removed and the content is e.g. High@3.1, or if the constraint
is set to output a Main profile stream, the problem immediately comes up. I
suspect the presence of B-frames might be the cause, however I have not yet
checked that deeply.

This "bad" input however, muxed into MP4 file, is played back well with
Windows Media Player, and Media Player Classic HC. Not VLC though. I had
a chance to pass it through another decoder in Windows and the content
was perfectly playable, which makes me think it is not that bad.

With libavcodec, however, I see multiple "no picture " log outputs, and
eventually a fatal "Internal error, picture buffer overflow". Moreover,
the problem comes up immediately with the first video frame of
the stream, which contains SPS, PPS and IDR NALs and in my understanding is
supposed to be decodable immediately. Some frames are still decoded (and
without artifacts, though possibly in the wrong order), however the rate of
erroneous frames is pretty high (perhaps one half).

Having found this out, I was able to isolate it all into a few dozen lines of
code (plus the AVPacket data embedded inline):

http://www.alax.info/trac/public/browser/trunk/Utilities/FFmpeg/NoPictureIssue/NoPictureIssue.cpp

I wonder if anyone is aware of this problem and whether there is any
additional information out there on how to work around it. This occurs with
both the Zeranoe pre-built libraries and those I built myself with MinGW.

This stream is Annex B formatted, without global headers (extradata);
could this be a problem, and/or is it safer or otherwise recommended to
convert to length prefixes? I looked into the code and I see that there is
support for both.

Is there any specific initialization I am missing? Since all the data is
basically one packet with a few NALs, the sequence of calls is as simple as:

av_lockmgr_register(...)
avcodec_register_all()
av_log_set_level(AV_LOG_DEBUG)
avcodec_alloc_context()
avcodec_find_decoder(CODEC_ID_H264)
avcodec_open(...)
av_init_packet(...)
avcodec_decode_video2(...)

with the data consumed by avcodec_decode_video2(), the error logged, and no
frame available on the output. (A sketch of this sequence follows below.)
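
For illustration (this is an editor's paraphrase of the sequence above, not the code behind the link), written out with the *2/*3 variants of the calls named:

#include <stdint.h>
#include <libavcodec/avcodec.h>
#include <libavutil/log.h>

static int try_decode(uint8_t *data, int size)   /* 'data' = the Annex B access unit */
{
    AVCodec *dec;
    AVCodecContext *ctx;
    AVFrame *frame;
    AVPacket pkt;
    int got_picture = 0, used;

    avcodec_register_all();
    av_log_set_level(AV_LOG_DEBUG);

    dec   = avcodec_find_decoder(CODEC_ID_H264);
    ctx   = avcodec_alloc_context3(dec);
    frame = avcodec_alloc_frame();
    if (avcodec_open2(ctx, dec, NULL) < 0)
        return -1;

    av_init_packet(&pkt);
    pkt.data = data;                 /* SPS, PPS and IDR NALs in one packet */
    pkt.size = size;

    used = avcodec_decode_video2(ctx, frame, &got_picture, &pkt);
    /* reported behaviour: the whole packet is consumed, "no picture" is
     * logged, and got_picture stays 0 */
    return used < 0 ? used : got_picture;
}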

Thanks,
Roman



From nicolas.george at normalesup.org Wed Nov 14 09:46:22 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Wed, 14 Nov 2012 09:46:22 +0100
Subject: [Libav-user] How to solve 'container format requires global
headers' problem?
In-Reply-To: <A64B077F-873B-46CA-A504-D96A5C4A782B@gmail.com>
References: <A64B077F-873B-46CA-A504-D96A5C4A782B@gmail.com>
Message-ID: <20121114084622.GB4877@phare.normalesup.org>

On quartidi 24 brumaire, an CCXXI, FFMPEG wrote:
> When I run my program, there's warning 'Codec for stream 0 does not use
> global headers but container format requires global headers'

Look in doc/examples/muxing.c and search for "global" in it.
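
For reference, the lines that search turns up boil down to setting the global-header flag on the encoder context before it is opened, whenever the output format asks for it:

/* from doc/examples/muxing.c: some formats want stream headers (extradata)
 * to be global; set this on the AVCodecContext 'c' before avcodec_open2() */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;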

Regards,

--
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121114/dcb124d4/attachment.asc>

From Hannes.Wuerfel at student.hpi.uni-potsdam.de Wed Nov 14 12:43:04 2012
From: Hannes.Wuerfel at student.hpi.uni-potsdam.de (Wuerfel, Hannes)
Date: Wed, 14 Nov 2012 12:43:04 +0100
Subject: [Libav-user] demuxing streams and mux them back to file
Message-ID: <86E5A0E93B7D374B85FEA49458759233326A12CB21@8mxstu1r.hpi.uni-potsdam.de>

Hey all,

I'm trying to write a simple program which decodes the video and audio frames and then encodes and muxes them back into a file.
How can I generally accomplish that?

Decoding and encoding the video stream by itself works.
Decoding and encoding the audio stream by itself works
(as shown in the decoding_encoding.c example).

Basically, I took the demuxing.c example and wanted to mux the video and audio frames back together, but I have been stuck on this for days.

I've added an output AVFormatContext, an output AVStream for video and an output AVStream for audio.
I've tried to copy the input codec settings.
Further, I used avio_open() to open the file,
avformat_write_header() to write the stream headers,
and after decoding I call av_write_trailer()
and avio_close(). (A rough skeleton sketch follows after the code below.)

Is this basically right, or am I missing something important?

This is how I tried to copy the input codec settings to the output codec settings:

static AVStream *add_stream(AVFormatContext *oc, const AVCodec *inCodec,
                            AVCodec **outCodec, enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;

    /* find the encoder */
    *outCodec = avcodec_find_encoder(codec_id);
    if (!(*outCodec)) {
        fprintf(stderr, "Could not find codec\n");
        exit(1);
    }

    st = avformat_new_stream(oc, *outCodec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->id = oc->nb_streams - 1;
    c = st->codec;

    switch ((*outCodec)->type) {
    case AVMEDIA_TYPE_AUDIO:
        st->id = 1;
        c->sample_fmt  = audio_stream->codec->sample_fmt;
        c->bit_rate    = audio_stream->codec->bit_rate;
        c->sample_rate = audio_stream->codec->sample_rate;
        c->channels    = audio_stream->codec->channels;
        break;
    case AVMEDIA_TYPE_VIDEO:
        avcodec_get_context_defaults3(c, inCodec);
        c->codec_id = video_stream->codec->codec_id;

        c->bit_rate      = video_stream->codec->bit_rate;
        c->width         = video_stream->codec->width;
        c->height        = video_stream->codec->height;
        c->time_base.den = video_stream->codec->time_base.den;
        c->time_base.num = video_stream->codec->time_base.num;
        c->gop_size      = video_stream->codec->gop_size;
        c->pix_fmt       = video_stream->codec->pix_fmt;
        c->max_b_frames  = video_stream->codec->max_b_frames;
        c->mb_decision   = video_stream->codec->mb_decision;
        break;
    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}
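
For illustration, a hedged sketch (names illustrative, not the poster's actual code) of the surrounding skeleton described above; write_next_packet() stands in for the per-frame decode/re-encode loop:

#include <libavformat/avformat.h>

extern int write_next_packet(AVFormatContext *oc);   /* hypothetical */

static int remux_to(const char *filename)
{
    AVFormatContext *oc = NULL;
    int ret;

    ret = avformat_alloc_output_context2(&oc, NULL, NULL, filename);
    if (ret < 0)
        return ret;

    /* ... add_stream() as above plus avcodec_open2() for video and audio ... */

    if ((ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE)) < 0)
        return ret;
    if ((ret = avformat_write_header(oc, NULL)) < 0)
        return ret;

    while (write_next_packet(oc) == 0)   /* decode, re-encode, interleave */
        ;

    av_write_trailer(oc);
    return avio_close(oc->pb);
}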

Thanks.

From cehoyos at ag.or.at Wed Nov 14 13:01:32 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Wed, 14 Nov 2012 12:01:32 +0000 (UTC)
Subject: [Libav-user] demuxing streams and mux them back to file
References: <86E5A0E93B7D374B85FEA49458759233326A12CB21@8mxstu1r.hpi.uni-potsdam.de>
Message-ID: <loom.20121114T130029-848@post.gmane.org>

Wuerfel, Hannes <Hannes.Wuerfel at ...> writes:

> I'm trying to write a simple program which decodes the
> video and audio frames and encodes and mux it back to a file.
> How can I generally accomplish that?

> Basically I took the demuxing.c example and wanted to mux
> the video and audio frames back together but I'm
> stucked for days on this.

Did you also look at / test the muxing.c example?

Carl Eugen


From rwoods at vaytek.com Wed Nov 14 16:55:02 2012
From: rwoods at vaytek.com (Ron Woods)
Date: Wed, 14 Nov 2012 09:55:02 -0600
Subject: [Libav-user] when to call av_register_all()
In-Reply-To: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
References: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
Message-ID: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>

Call av_register_all() only at the beginning of your main program, not before each file you open; but, depending on what you are doing, you may also need to call avcodec_register_all() and/or avfilter_register_all(). Refer to the documentation if you're not sure what each does. (A minimal sketch follows below.)
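
For illustration, a minimal sketch of that initialization order:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfilter.h>

int main(int argc, char **argv)
{
    av_register_all();        /* (de)muxers and protocols; also pulls in the codecs */
    avfilter_register_all();  /* only needed if you use libavfilter */

    /* ... open any number of files and decoder instances from here on ... */
    return 0;
}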

From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Quy Pham Sy
Sent: Tuesday, November 13, 2012 10:31 PM
To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
Subject: [Libav-user] when to call av_register_all()

Hi, i'm very new to this, so please bear with me if the question is stupid.

In my app, there will be more than one instance of decoder process that decode multiple files at the same time, so there will be more than place where i need to open and do all the initialization before open video files and start decoding.

My question is that should i call av_register_all for each decoding process? when and where should the method be called?

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121114/d0e540ab/attachment.html>

From Hannes.Wuerfel at student.hpi.uni-potsdam.de Wed Nov 14 21:37:38 2012
From: Hannes.Wuerfel at student.hpi.uni-potsdam.de (Wuerfel, Hannes)
Date: Wed, 14 Nov 2012 21:37:38 +0100
Subject: [Libav-user] demuxing streams and mux them back to file
In-Reply-To: <loom.20121114T130029-848@post.gmane.org>
References: <86E5A0E93B7D374B85FEA49458759233326A12CB21@8mxstu1r.hpi.uni-potsdam.de>,
<loom.20121114T130029-848@post.gmane.org>
Message-ID: <86E5A0E93B7D374B85FEA49458759233326A12CB23@8mxstu1r.hpi.uni-potsdam.de>

Hey Carl,

thank you for your hint! With that I was able to create a video stream and an audio stream and mux them into a file.
But I guess I was too naive in thinking that I could decode the frames and encode them back to the file as they come in, without setting new presentation timestamps and so on.

The difference between the muxing.c example and my program is that I don't do something like this:

if (frame)
    frame->pts = 0;
for (;;) {
    /* Compute current audio and video time. */
    if (audio_st)
        audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
    else
        audio_pts = 0.0;

    if (video_st)
        video_pts = (double)video_st->pts.val * video_st->time_base.num / video_st->time_base.den;
    else
        video_pts = 0.0;

    if ((!audio_st || audio_pts >= STREAM_DURATION) &&
        (!video_st || video_pts >= STREAM_DURATION))
        break;

    /* write interleaved audio and video frames */
    if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
        write_audio_frame(oc, audio_st);
    } else {
        write_video_frame(oc, video_st);
        frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
    }
}

In pseudocode, this is what I am doing (I don't use any buffer queues for frames or packets; I simply want to write them back as they come):

while (getFrame)
    video?
        decodeVideo, encode the decoded AVFrame back to the destination file
    audio?
        decodeAudio, encode the decoded AVFrame back to the destination file

Should it work out of the box if I feed in an .mp4 file with an H.264 video codec and an AAC audio codec and write it back to an .mp4 file?
It does not for me:

[libx264 @ 0000000000e31e00] MB rate (36000000) > level limit (2073600)
[libx264 @ 0000000000e31e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX
[libx264 @ 0000000000e31e00] profile High, level 5.2
[libx264 @ 0000000000e31e00] 264 - core 128 r2216 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: caba
c=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11
fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=
0 weightp=2 keyint=12 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=12 rc=abr mbtree=1 bitrate=3091 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=
4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'outMuxed.mp4':
Stream #0:0: Video: h264, yuv420p, 640x480, q=-1--1, 3091 kb/s, 90k tbn, 30k tbc
Stream #0:1: Audio: aac, 48000 Hz, 1 channels, s16, 96 kb/s
Video processing started ...
[mp4 @ 0000000000ecd300] Encoder did not produce proper pts, making some up.
[libx264 @ 0000000000e31e00] specified frame type (3) at 12 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 13 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 14 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 15 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 16 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 17 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 18 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 19 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 20 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 21 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 22 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 23 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 24 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 25 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 26 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 27 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 28 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 29 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 42 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 43 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 44 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 45 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 46 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 47 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 48 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 49 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 50 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 51 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 52 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 53 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 54 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 55 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 56 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 57 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 58 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 59 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 72 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 73 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 74 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 75 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 76 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 77 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 78 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 79 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 80 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 81 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 82 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 83 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 84 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 85 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 86 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 87 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 88 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 89 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 102 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 103 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 104 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 105 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 106 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 107 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 108 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 109 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 110 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 111 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 112 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 113 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 114 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 115 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 116 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 117 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 118 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 119 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 132 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 133 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 134 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 135 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 136 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 137 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 138 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 139 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 140 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 141 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 142 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 143 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 144 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 145 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 146 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 147 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 148 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 149 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 162 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 163 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] non-strictly-monotonic PTS
[libx264 @ 0000000000e31e00] specified frame type (3) at 164 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 165 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 166 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 167 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 168 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 169 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 170 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 171 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 172 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 173 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 174 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 175 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 176 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 177 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 178 is not compatible with keyframe interval
[libx264 @ 0000000000e31e00] specified frame type (3) at 179 is not compatible with keyframe interval
[mp4 @ 0000000000ecd300] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 184000 >= 184000
Error while writing video frame


________________________________________
From: libav-user-bounces at ffmpeg.org [libav-user-bounces at ffmpeg.org] on behalf of Carl Eugen Hoyos [cehoyos at ag.or.at]
Sent: Wednesday, 14 November 2012 13:01
To: libav-user at ffmpeg.org
Subject: Re: [Libav-user] demuxing streams and mux them back to file

Wuerfel, Hannes <Hannes.Wuerfel at ...> writes:

> I'm trying to write a simple program which decodes the
> video and audio frames and then encodes and muxes them back into a file.
> How can I generally accomplish that?

> Basically I took the demuxing.c example and wanted to mux
> the video and audio frames back together, but I've been
> stuck on this for days.

Did you also look at / test the muxing.c example?
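
The error above usually means the packet timestamps were not rescaled into the
output stream's time base (and kept strictly increasing per stream) before
muxing. As a rough, untested sketch of that step only, with ofmt_ctx, in_stream
and out_stream as placeholder names rather than anything from your code:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Sketch: rescale one packet from the input stream's time base to the
 * output stream's time base before handing it to the muxer. */
static int write_rescaled(AVFormatContext *ofmt_ctx,
                          AVStream *in_stream, AVStream *out_stream,
                          AVPacket *pkt)
{
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q(pkt->pts, in_stream->time_base,
                                out_stream->time_base);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q(pkt->dts, in_stream->time_base,
                                out_stream->time_base);
    if (pkt->duration > 0)
        pkt->duration = av_rescale_q(pkt->duration, in_stream->time_base,
                                     out_stream->time_base);
    pkt->stream_index = out_stream->index;

    /* The muxer still requires a strictly increasing DTS per stream,
     * so packets must be written in DTS order. */
    return av_interleaved_write_frame(ofmt_ctx, pkt);
}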

Carl Eugen

_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

From phamsyquybk at gmail.com Thu Nov 15 01:37:57 2012
From: phamsyquybk at gmail.com (Quy Pham Sy)
Date: Thu, 15 Nov 2012 09:37:57 +0900
Subject: [Libav-user] when to call av_register_all()
In-Reply-To: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>
References: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>
Message-ID: <CAEEaHNLj9eywu_EF6P5tne54_-uQcerbzMDPEYXUneU2Z_+MBw@mail.gmail.com>

Do I need to unregister when there is no more need for ffmpeg, or before the
program exits?


On Thu, Nov 15, 2012 at 12:55 AM, Ron Woods <rwoods at vaytek.com> wrote:

> Call av_register_all() only at the beginning of your main program, not
> before each open file; but, depending on what you are doing, you may also
> need to call avcodec_register_all() and/or avfilter_register_all(). Refer
> to the documentation if you're not sure what each does.
>
> From: libav-user-bounces at ffmpeg.org [mailto:
> libav-user-bounces at ffmpeg.org] On Behalf Of Quy Pham Sy
> Sent: Tuesday, November 13, 2012 10:31 PM
> To: This list is about using libavcodec, libavformat, libavutil,
> libavdevice and libavfilter.
> Subject: [Libav-user] when to call av_register_all()
>
> Hi, I'm very new to this, so please bear with me if the question is stupid.
>
> In my app there will be more than one decoder instance, decoding multiple
> files at the same time, so there is more than one place where I need to do
> all the initialization before opening video files and starting to decode.
>
> My question is: should I call av_register_all for each decoding
> process? When and where should the method be called?
>
> Thanks,
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/a54f95c0/attachment.html>

From mrfun.china at gmail.com Thu Nov 15 04:17:44 2012
From: mrfun.china at gmail.com (FFMPEG)
Date: Thu, 15 Nov 2012 14:17:44 +1100
Subject: [Libav-user] Can't find encoder
Message-ID: <AD653E50-C62A-49F0-B6B7-17328965239C@gmail.com>

Hi,

My avcodec_find_encoder(CODEC_ID_THEORA) returns null ptr

Is there anywhere I can check why this encoder is not supported?

Thanks!

From adrozdoff at gmail.com Thu Nov 15 05:51:09 2012
From: adrozdoff at gmail.com (hatred)
Date: Thu, 15 Nov 2012 15:51:09 +1100
Subject: [Libav-user] How to correctly split AVFrame with audio samples on
encoding
Message-ID: <CAP9ViodJLrw10c-mikOmuGzx-f31PTr_ZPa4ggvU-rqLHWvUiQ@mail.gmail.com>

Hi List,

I am writing a live transcoder for our IP cameras; in some cases the audio
stream also needs to be transcoded.
But I have a problem with non-equal frame_size in the input codec context and
the output codec context:
1. I read an audio packet.
2. I decode it with avcodec_decode_audio4().
This provides AVFrames with nb_samples = 1152, but when I open the output codec
it has frame_size = 576, and when I encode an audio frame I get the error message:
more samples than frame size (avcodec_encode_audio2)

I think that I must split one AVFrame into two or more and encode the
resulting frames. Is that a valid way? Must I do it manually with
av_samples_copy? How can I recalculate the PTS for the resulting frames?

Thanks for any answers.

PS: the output format is FLV with the default video and audio codecs.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/2a22c4a9/attachment.html>

From alan.3976 at gmail.com Thu Nov 15 07:01:52 2012
From: alan.3976 at gmail.com (Alan)
Date: Thu, 15 Nov 2012 14:01:52 +0800
Subject: [Libav-user] Can't find encoder
In-Reply-To: <AD653E50-C62A-49F0-B6B7-17328965239C@gmail.com>
References: <AD653E50-C62A-49F0-B6B7-17328965239C@gmail.com>
Message-ID: <5990058.hlZQJmOvDo@linux-m1n0.site>

There is nothing else we can do to help you out, since you have provided little
valuable information.

OK, first make sure that you have already registered the codecs.

Check the API avcodec_register_all().
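
A minimal self-contained check could look like this (assuming a build that
actually contains a Theora encoder):

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    AVCodec *enc;

    /* Register all codecs once, before any encoder/decoder lookup. */
    avcodec_register_all();

    enc = avcodec_find_encoder(CODEC_ID_THEORA);
    if (!enc) {
        /* Either registration was skipped or this build simply has no
         * Theora encoder compiled in. */
        fprintf(stderr, "Theora encoder not available in this build\n");
        return 1;
    }
    printf("found encoder: %s\n", enc->name);
    return 0;
}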



On Thursday, November 15, 2012 02:17:44 PM FFMPEG wrote:
> Hi,
>
> My avcodec_find_encoder(CODEC_ID_THEORA) returns null ptr
>
> Is there anywhere I can check why this encoder is not supported?
>
> Thanks!
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user

From cehoyos at ag.or.at Thu Nov 15 09:47:17 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 15 Nov 2012 08:47:17 +0000 (UTC)
Subject: [Libav-user] Can't find encoder
References: <AD653E50-C62A-49F0-B6B7-17328965239C@gmail.com>
Message-ID: <loom.20121115T094636-779@post.gmane.org>

FFMPEG <mrfun.china at ...> writes:

> My avcodec_find_encoder(CODEC_ID_THEORA) returns null ptr
>
> Is there anywhere I can check why this encoder is not supported?

You could carefully search your configure line for "--enable-libtheora".
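
If the build logs are not at hand, the configure line can also be checked at
runtime; a small sketch:

#include <stdio.h>
#include <string.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* avcodec_configuration() returns the ./configure line libavcodec
     * was built with, so the libtheora switch can be searched directly. */
    const char *conf = avcodec_configuration();

    printf("configuration: %s\n", conf);
    if (!strstr(conf, "--enable-libtheora"))
        printf("libtheora was not enabled in this build\n");
    return 0;
}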

Carl Eugen


From chengang5 at staff.sina.com.cn Thu Nov 15 11:49:04 2012
From: chengang5 at staff.sina.com.cn (陈 钢)
Date: Thu, 15 Nov 2012 18:49:04 +0800
Subject: [Libav-user] [HELP]mpeg-ts video and audio out of sync,
convert from flv with avformat
Message-ID: <4E1F16A8-6A28-4099-9991-21ED7CB7103D@staff.sina.com.cn>

Because I need to play my videos on an iPad, I wrote an nginx module which converts FLV fragments to MPEG-TS fragments.

But there is a problem: as the video plays, the video and audio drift more and more out of sync.

After 30 minutes they are very significantly out of sync.

I have been working on this issue for several days.

I need help.

Below is part of my code:

if (packet.stream_index == video_index) {

    if (packet.pts != (u_int)AV_NOPTS_VALUE)
    {
        packet.pts = av_rescale_q(packet.pts, AV_TIME_BASE_NEW_Q, video_stream->time_base);
    }
    if (packet.dts != (u_int)AV_NOPTS_VALUE)
    {
        packet.dts = av_rescale_q(packet.dts, AV_TIME_BASE_NEW_Q, video_stream->time_base);
    }

    if (video_bitstream_filters != NULL)
    {
        ret = write_frame(oc, &packet, video_stream->codec, video_bitstream_filters);
        if (ret < 0)
        {
            fprintf(stderr, "Error:write_frame[video] error: %d\n", ret);
        }
    } else {

        ret = av_interleaved_write_frame(oc, &packet);
        if (ret < 0)
        {
            fprintf(stderr, "Error:av_interleaved_write_frame[video] error: %d\n", ret);
        }
    }
} else if (packet.stream_index == audio_index) {

    if (packet.pts != (u_int)AV_NOPTS_VALUE)
    {
        packet.pts = av_rescale_q(packet.pts, audio_stream->codec->time_base, audio_stream->time_base);
    }
    if (packet.dts != (u_int)AV_NOPTS_VALUE)
        packet.dts = av_rescale_q(packet.dts, audio_stream->codec->time_base, audio_stream->time_base);
    if (packet.duration > 0)
        packet.duration = av_rescale_q(packet.duration, audio_stream->codec->time_base, audio_stream->time_base);
    ret = av_interleaved_write_frame(oc, &packet);
    if (ret < 0)
    {
        fprintf(stderr, "Error:av_interleaved_write_frame[audio] error: %d\n", ret);
    }
}

From mrfun.china at gmail.com Thu Nov 15 12:56:47 2012
From: mrfun.china at gmail.com (YIRAN LI)
Date: Thu, 15 Nov 2012 22:56:47 +1100
Subject: [Libav-user] Can't find encoder
In-Reply-To: <loom.20121115T094636-779@post.gmane.org>
References: <AD653E50-C62A-49F0-B6B7-17328965239C@gmail.com>
<loom.20121115T094636-779@post.gmane.org>
Message-ID: <CAN16yyNuRajJHGY08=PD+j4cSP9bnSDgEzdtMy=zXxHmdAzPwQ@mail.gmail.com>

Yes,

I'm working on Windows, and I think the lib I use now was compiled with
libtheora disabled.

I'll check how to include that lib in.

Thanks

2012/11/15 Carl Eugen Hoyos <cehoyos at ag.or.at>

> FFMPEG <mrfun.china at ...> writes:
>
> > My avcodec_find_encoder(CODEC_ID_THEORA) returns null ptr
> >
> > Is there anywhere I can check why this encoder is not supported?
>
> You could carefully search your configure line for "--enable-libtheora".
>
> Carl Eugen
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/1009a651/attachment.html>

From nicolas.george at normalesup.org Thu Nov 15 13:18:17 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Thu, 15 Nov 2012 13:18:17 +0100
Subject: [Libav-user] How to correctly split AVFrame with audio samples
on encoding
In-Reply-To: <CAP9ViodJLrw10c-mikOmuGzx-f31PTr_ZPa4ggvU-rqLHWvUiQ@mail.gmail.com>
References: <CAP9ViodJLrw10c-mikOmuGzx-f31PTr_ZPa4ggvU-rqLHWvUiQ@mail.gmail.com>
Message-ID: <20121115121817.GA17792@phare.normalesup.org>

On quintidi 25 Brumaire, year CCXXI, hatred wrote:
> This provides AVFrames with nb_samples = 1152, but when I open the output codec
> it has frame_size = 576, and when I encode an audio frame I get the error message:
> more samples than frame size (avcodec_encode_audio2)
>
> I think that I must split one AVFrame into two or more and encode the
> resulting frames. Is that a valid way? Must I do it manually with
> av_samples_copy?

Yes. This is clumsy but currently unavoidable. A simpler API was suggested
at some point, but no one did anything for it.

You could use a filter to do the work, but it would probably be overkill.

> How can I recalculate the PTS for the resulting frames?

A simple extrapolation based on the sample rate versus time base, using
av_rescale_q. If the time base is the same as the sample rate, it is even
easier.
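
As a rough, untested sketch of that splitting loop (out_frame is assumed to be
pre-allocated with buffers for enc_ctx->frame_size samples, and start_pts is the
PTS of the decoded frame already expressed in the encoder time base):

#include <libavcodec/avcodec.h>
#include <libavutil/samplefmt.h>
#include <libavutil/mathematics.h>

static void split_into_encoder_frames(AVCodecContext *enc_ctx,
                                      AVFrame *in_frame, AVFrame *out_frame,
                                      int64_t start_pts)
{
    int offset = 0;

    while (offset < in_frame->nb_samples) {
        int n = FFMIN(enc_ctx->frame_size, in_frame->nb_samples - offset);

        /* Copy the next chunk of samples into the smaller frame. */
        av_samples_copy(out_frame->data, in_frame->data,
                        0 /* dst offset */, offset /* src offset */,
                        n, enc_ctx->channels, enc_ctx->sample_fmt);
        out_frame->nb_samples = n;

        /* Extrapolate the PTS: samples consumed so far, rescaled from a
         * 1/sample_rate time base into the encoder time base. */
        out_frame->pts = start_pts +
            av_rescale_q(offset, (AVRational){1, enc_ctx->sample_rate},
                         enc_ctx->time_base);

        /* ... hand out_frame to avcodec_encode_audio2() here ... */
        offset += n;
    }
}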

Regards,

--
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/2c933eb4/attachment.asc>

From rwoods at vaytek.com Thu Nov 15 16:42:29 2012
From: rwoods at vaytek.com (Ron Woods)
Date: Thu, 15 Nov 2012 09:42:29 -0600
Subject: [Libav-user] when to call av_register_all()
In-Reply-To: <CAEEaHNLj9eywu_EF6P5tne54_-uQcerbzMDPEYXUneU2Z_+MBw@mail.gmail.com>
References: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>
<CAEEaHNLj9eywu_EF6P5tne54_-uQcerbzMDPEYXUneU2Z_+MBw@mail.gmail.com>
Message-ID: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF8281B@GODFATHERII.vaytek2003.local>

No, but you do need to close or free any ffmpeg objects that you created with, e.g., avcodec_close(), avfilter_graph_free(), av_free(), etc.
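
As a rough sketch of the overall pattern (register once at startup, then open
and clean up per file; the names and structure here are illustrative only):

#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt_ctx = NULL;
    int i;

    av_register_all();                    /* once, at program start */

    for (i = 1; i < argc; i++) {
        if (avformat_open_input(&fmt_ctx, argv[i], NULL, NULL) < 0)
            continue;
        avformat_find_stream_info(fmt_ctx, NULL);

        /* ... open decoders, read and decode frames, etc. ... */

        /* per-file cleanup: close what was opened for this file */
        avformat_close_input(&fmt_ctx);
    }
    return 0;                             /* no "unregister" call needed */
}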

From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Quy Pham Sy
Sent: Wednesday, November 14, 2012 6:38 PM
To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
Subject: Re: [Libav-user] when to call av_register_all()

Do I need to unregister when there is no more need for ffmpeg, or before the program exits?

On Thu, Nov 15, 2012 at 12:55 AM, Ron Woods <rwoods at vaytek.com> wrote:
Call av_register_all() only at the beginning of your main program, not before each open file; but, depending on what you are doing, you may also need to call avcodec_register_all() and/or avfilter_register_all(). Refer to the documentation if you're not sure what each does.

From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Quy Pham Sy
Sent: Tuesday, November 13, 2012 10:31 PM
To: This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter.
Subject: [Libav-user] when to call av_register_all()

Hi, I'm very new to this, so please bear with me if the question is stupid.

In my app there will be more than one decoder instance, decoding multiple files at the same time, so there is more than one place where I need to do all the initialization before opening video files and starting to decode.

My question is: should I call av_register_all for each decoding process? When and where should the method be called?

Thanks,

_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/98bcf223/attachment.html>

From nicolas.george at normalesup.org Thu Nov 15 19:05:54 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Thu, 15 Nov 2012 19:05:54 +0100
Subject: [Libav-user] when to call av_register_all()
In-Reply-To: <27FB82F7EB0DA04C92361A01E7BA9A9301A71DF8281B@GODFATHERII.vaytek2003.local>
References: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>
<CAEEaHNLj9eywu_EF6P5tne54_-uQcerbzMDPEYXUneU2Z_+MBw@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF8281B@GODFATHERII.vaytek2003.local>
Message-ID: <20121115180554.GA25126@phare.normalesup.org>

On quintidi 25 Brumaire, year CCXXI, Ron Woods wrote:
> No, but you do need to close or free any ffmpeg objects that you created
> with, e.g., avcodec_close(), avfilter_graph_free(), av_free(), etc.

Why would he *need* to? Freeing everything that was allocated is a good
practice, and absolutely necessary for a library, but for an application,
doing it just before exit is not a necessity at all.

> From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Quy Pham Sy

Please remember that top-posting is not acceptable on this list.

Regards,

--
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/b4242a78/attachment.asc>

From ryltsov at gmail.com Thu Nov 15 19:10:14 2012
From: ryltsov at gmail.com (Roman Ryltsov)
Date: Thu, 15 Nov 2012 20:10:14 +0200
Subject: [Libav-user] when to call av_register_all()
In-Reply-To: <20121115180554.GA25126@phare.normalesup.org>
References: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>
<CAEEaHNLj9eywu_EF6P5tne54_-uQcerbzMDPEYXUneU2Z_+MBw@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF8281B@GODFATHERII.vaytek2003.local>
<20121115180554.GA25126@phare.normalesup.org>
Message-ID: <CAE9ZOGGnQW3o5GgAaibkUencXr6J-YMos2MSjcFJQ-zU9ombZg@mail.gmail.com>

Because you never know (unless you have intimate knowledge of the internals)
whether it is just freeing the resources in use, or whether it also flushes data
and takes certain finishing steps which are absolutely necessary. Graceful
closing is always a good thing to do; FFmpeg is hardly an exception.

Roman

On Thu, Nov 15, 2012 at 8:05 PM, Nicolas George <
nicolas.george at normalesup.org> wrote:

> On quintidi 25 Brumaire, year CCXXI, Ron Woods wrote:
> > No, but you do need to close or free any ffmpeg objects that you created
> > with, e.g., avcodec_close(), avfilter_graph_free(), av_free(), etc.
>
> Why would he *need* to? Freeing everything that was allocated is a good
> practice, and absolutely necessary for a library, but for an application,
> doing it just before exit is not a necessity at all.
>
> > From: libav-user-bounces at ffmpeg.org [mailto:
> libav-user-bounces at ffmpeg.org] On Behalf Of Quy Pham Sy
>
> Please remember that top-posting is not acceptable on this list.
>
> Regards,
>
> --
> Nicolas George
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.12 (GNU/Linux)
>
> iEYEARECAAYFAlClLwIACgkQsGPZlzblTJPHCACfSPUzY5HC5tJQt8cZIrnF0xDt
> 3FgAnjzauFPt8rvCIRAVjfGBWx8gmxLe
> =EdmG
> -----END PGP SIGNATURE-----
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/979451e1/attachment.html>

From nicolas.george at normalesup.org Thu Nov 15 20:01:03 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Thu, 15 Nov 2012 20:01:03 +0100
Subject: [Libav-user] when to call av_register_all()
In-Reply-To: <CAE9ZOGGnQW3o5GgAaibkUencXr6J-YMos2MSjcFJQ-zU9ombZg@mail.gmail.com>
References: <CAEEaHNJXHtpobNnU3HYGz-AR6offrv4=USsFJAVgCMGnBxoBRQ@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF82808@GODFATHERII.vaytek2003.local>
<CAEEaHNLj9eywu_EF6P5tne54_-uQcerbzMDPEYXUneU2Z_+MBw@mail.gmail.com>
<27FB82F7EB0DA04C92361A01E7BA9A9301A71DF8281B@GODFATHERII.vaytek2003.local>
<20121115180554.GA25126@phare.normalesup.org>
<CAE9ZOGGnQW3o5GgAaibkUencXr6J-YMos2MSjcFJQ-zU9ombZg@mail.gmail.com>
Message-ID: <20121115190103.GA8730@phare.normalesup.org>

On quintidi 25 Brumaire, year CCXXI, Roman Ryltsov wrote:
> Because you never know (unless you have intimate knowledge of internals)
> whether it is just freeing the resources in use, or it also flushes data
> and takes certain completing steps which are absolutely necessary. Graceful

You can also read the docs. They usually specify it, and if they do not,
they should. The name of the function could be a hint too: av_free() is
unlikely to perform any flushing.

> closing is always a good thing to do, FFmpeg is hardly an exception.

"good" != "necessary"; I already acknowledged that it was good practice, but
I stand on my point that it is not absolutely necessary.

> On Thu, Nov 15, 2012 at 8:05 PM, Nicolas George <
> nicolas.george-t+5nXNeJE7o5viHyz3+zKA at public.gmane.org> wrote:

Please do not top-post here (bis).

Regards,

--
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121115/f6dc045f/attachment.asc>

From adrozdoff at gmail.com Fri Nov 16 01:51:08 2012
From: adrozdoff at gmail.com (hatred)
Date: Fri, 16 Nov 2012 11:51:08 +1100
Subject: [Libav-user] How to correctly split AVFrame with audio samples
on encoding
In-Reply-To: <20121115121817.GA17792@phare.normalesup.org>
References: <CAP9ViodJLrw10c-mikOmuGzx-f31PTr_ZPa4ggvU-rqLHWvUiQ@mail.gmail.com>
<20121115121817.GA17792@phare.normalesup.org>
Message-ID: <CAP9ViocMgO5hy6GAhqc2-nF1N8Xt=WK4dTw_6MBzx2Qn3BO0yQ@mail.gmail.com>

Nicolas, thanks for the answer. I have now implemented this solution, but I get
a warning message:
Que input is backward in time

but I think I will resolve it :-)

2012/11/15 Nicolas George <nicolas.george at normalesup.org>

> On quintidi 25 Brumaire, year CCXXI, hatred wrote:
> > I think that I must split one AVFrame into two or more and encode the
> > resulting frames. Is that a valid way? Must I do it manually with
> > av_samples_copy?
>
> Yes. This is clumsy but currently unavoidable. A simpler API was suggested
> at some point, but no one did anything for it.
>
> You could use a filter to do the work, but it would probably be overkill.
>
> > How can I recalculate the PTS for the resulting frames?
>
> A simple extrapolation based on the sample rate versus time base, using
> av_rescale_q. If the time base is the same as the sample rate, it is even
> easier.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121116/929c195c/attachment.html>

From zhou.wenpeng at rdnet.fi Fri Nov 16 08:58:24 2012
From: zhou.wenpeng at rdnet.fi (Wenpeng Zhou)
Date: Fri, 16 Nov 2012 09:58:24 +0200
Subject: [Libav-user] [mjpeg @ 009ceb80] Provided packet is too small,
needs to be 8163
Message-ID: <004201cdc3d0$292a62f0$7b7f28d0$@rdnet.fi>

Hi,

When I try to decode and encode frames from live streams, I get this error.

How can I fix it?



From mjbshaw at gmail.com Sat Nov 17 02:22:39 2012
From: mjbshaw at gmail.com (Michael Bradshaw)
Date: Fri, 16 Nov 2012 18:22:39 -0700
Subject: [Libav-user] Detecting image orientation in pictures from a camera?
Message-ID: <CAEhrai35_OVjc00fO3GxxhgX41NJnDsCZy54m8cSpBphUE-aow@mail.gmail.com>

I've got a series of images taken with a camera that I need to process with
libav*. Is there any way to check the EXIF data to determine
rotation/orientation for the image (so that I can rotate it in my program)?

When I print out the metadata for the AVFormatContext, AVStream, and
AVFrame, they're all empty, and beyond that I'm not sure where to look.
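
(The dump I mean is roughly the following sketch, with fmt_ctx being the
already-opened AVFormatContext; stream metadata is walked the same way.)

#include <stdio.h>
#include <libavformat/avformat.h>

static void dump_metadata(AVFormatContext *fmt_ctx)
{
    AVDictionaryEntry *tag = NULL;
    unsigned i;

    /* Iterate every entry of the container-level dictionary. */
    while ((tag = av_dict_get(fmt_ctx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX)))
        printf("format: %s=%s\n", tag->key, tag->value);

    for (i = 0; i < fmt_ctx->nb_streams; i++) {
        tag = NULL;
        while ((tag = av_dict_get(fmt_ctx->streams[i]->metadata, "", tag,
                                  AV_DICT_IGNORE_SUFFIX)))
            printf("stream %u: %s=%s\n", i, tag->key, tag->value);
    }
}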

Thanks,

Michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121116/9df8db97/attachment.html>

From cehoyos at ag.or.at Sat Nov 17 11:41:08 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sat, 17 Nov 2012 10:41:08 +0000 (UTC)
Subject: [Libav-user] Detecting image orientation in pictures from a
camera?
References: <CAEhrai35_OVjc00fO3GxxhgX41NJnDsCZy54m8cSpBphUE-aow@mail.gmail.com>
Message-ID: <loom.20121117T114028-972@post.gmane.org>

Michael Bradshaw <mjbshaw at ...> writes:

> I've got a series of images taken with a camera that
> I need to process with libav*. Is there any way to check
> the EXIF data to determine rotation/orientation for the
> image (so that I can rotate it in my program)?
>
> When I print out the metadata for the AVFormatContext,
> AVStream, and AVFrame, they're all empty, and beyond
> that I'm not sure where to look.

Could you prepare a small test-case for trac?

Thank you, Carl Eugen


From mjbshaw at gmail.com Sat Nov 17 17:07:12 2012
From: mjbshaw at gmail.com (Michael Bradshaw)
Date: Sat, 17 Nov 2012 09:07:12 -0700
Subject: [Libav-user] Detecting image orientation in pictures from a
camera?
In-Reply-To: <loom.20121117T114028-972@post.gmane.org>
References: <CAEhrai35_OVjc00fO3GxxhgX41NJnDsCZy54m8cSpBphUE-aow@mail.gmail.com>
<loom.20121117T114028-972@post.gmane.org>
Message-ID: <CAEhrai2cpJTQe4ScS9XmSL2z4RxETJ7_FYdqsrrngbPiQJO7_Q@mail.gmail.com>

On Sat, Nov 17, 2012 at 3:41 AM, Carl Eugen Hoyos <cehoyos at ag.or.at> wrote:

> Michael Bradshaw <mjbshaw at ...> writes:
>
> > I've got a series of images taken with a camera that
> > I need to process with libav*. Is there any way to check
> > the EXIF data to determine rotation/orientation for the
> > image (so that I can rotate it in my program)?
> >
> > When I print out the metadata for the AVFormatContext,
> > AVStream, and AVFrame, they're all empty, and beyond
> > that I'm not sure where to look.
>
> Could you prepare a small test-case for trac?


Sure, I'll make one now.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121117/e60f81b8/attachment.html>

From elkrystus at gmail.com Mon Nov 19 11:21:52 2012
From: elkrystus at gmail.com (elkrystus)
Date: Mon, 19 Nov 2012 02:21:52 -0800 (PST)
Subject: [Libav-user] Merging two .mp4 files.
Message-ID: <1353320512233-4656062.post@n4.nabble.com>

Hi,

I want to merge two .mp4 files without re-encoding them,
and I want to do it on an Android phone.

So here is what I have done up to this point:
1) I have compiled ffmpeg with x264 support for Android as a shared library.
2) I've managed to merge two video files with identical codec settings (both
from the Android camera) by reading all AVPackets from both files and writing
them to the output file without decoding/encoding (roughly like the sketch below).
3) I've managed to encode a .jpeg photo as h264 video, which on its own
plays just fine.
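
As a rough sketch of that packet-copy step for a single input (ifmt_ctx and
ofmt_ctx already opened, output streams created with matching parameters and
the header already written; no error handling):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

static void copy_packets(AVFormatContext *ifmt_ctx, AVFormatContext *ofmt_ctx)
{
    AVPacket pkt;

    while (av_read_frame(ifmt_ctx, &pkt) >= 0) {
        AVStream *in  = ifmt_ctx->streams[pkt.stream_index];
        AVStream *out = ofmt_ctx->streams[pkt.stream_index];

        /* Rescale timestamps into the output stream's time base.  When
         * appending a second file, a constant offset would also have to
         * be added so its timestamps continue after the first file. */
        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pkt.pts, in->time_base, out->time_base);
        if (pkt.dts != AV_NOPTS_VALUE)
            pkt.dts = av_rescale_q(pkt.dts, in->time_base, out->time_base);

        /* av_interleaved_write_frame() takes over the packet data. */
        av_interleaved_write_frame(ofmt_ctx, &pkt);
    }
}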

What didn't work:
When I try to merge a video made by Android with the one that I have made from
the photo, it fails.
Only one part of the output video is visible in the player, or in some cases it
doesn't play at all.
I'm trying to match the codecs as closely as I can, but I am never able to match
them perfectly.

I have tried avcodec_copy_context(), but it doesn't seem to be the solution.
I have also tried to use h264Context or x264Context, but there are always
problems with includes, and when the program reaches the x264Context declaration I
get a fatal signal 11 error.


Is it possible to put two h264 streams encoded with different settings into
one mp4 file and somehow tell the decoder to decode them separately?



Do you have any ideas about what I am doing wrong, or what I can do to make it
work?

All help is appreciated.



--
View this message in context: http://libav-users.943685.n4.nabble.com/Merging-two-mp4-files-tp4656062.html
Sent from the libav-users mailing list archive at Nabble.com.

From mc at enciris.com Mon Nov 19 18:36:03 2012
From: mc at enciris.com (Malik Cissé)
Date: Mon, 19 Nov 2012 12:36:03 -0500
Subject: [Libav-user] ffmpeg H.264 decoding
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF3101@VMBX116.ihostexchange.net>


Hi,

I managed to get an ffmpeg-based H.264 elementary stream decoder to work.
It uses the following syntax to decode one frame:
...
codec = avcodec_find_decoder(CODEC_ID_H264);
..
len = avcodec_decode_video2(c, picture, &got_picture, &pkt);

My question:
The decoding goes very fast and CPU usage is low.
Can I guarantee the same performance on similar CPUs?
Is any Hardware acceleration (mmx, GPU, DXVA, etc) used to decode the frames?
What is done behind the scenes? Is there any buffering?
Thanks,

Malik

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121119/28dde80c/attachment.html>

From nkipe at tatapowersed.com Tue Nov 20 04:22:09 2012
From: nkipe at tatapowersed.com (Navin)
Date: Tue, 20 Nov 2012 08:52:09 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
Message-ID: <50AAF761.30704@tatapowersed.com>

Hi! Glad to be part of this mailing list.
What I wanted to create, was a program which would receive a streaming
video, and when it decodes a frame of the video into either a bitmap
format or just pure RGB (perhaps stored in a char array), it would
notify another program that it has received a frame, and the other
program would take the RGB values and display it.
I've already asked this question here:
http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=805
and rogerdpack told me to post my question on the libav mailing list.
I have been through many websites, but they either use img_convert
(which doesn't work) or sws_scale, which crashes when I try to use it
with RGB.
Could anyone help with a complete piece of code which can give me the
RGB values of a frame?

This is a part of the YUV conversion that I tried initially.

i=0;
while(av_read_frame(pFormatCtx, &packet) >= 0)
{
// Is this a packet from the video stream?
if(packet.stream_index==videoStream)
{
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

// Did we get a video frame?
if(frameFinished)
{
// Convert the image into YUV format that SDL uses
sws_scale( sws_ctx, (uint8_t const * const *)pFrame->data,
pFrame->linesize, 0, pCodecCtx->height, pict.data, pict.linesize );


--
Nav


From amir.rouhi at rmit.edu.au Tue Nov 20 07:18:01 2012
From: amir.rouhi at rmit.edu.au (Rouhi Amirhossein2)
Date: Tue, 20 Nov 2012 17:18:01 +1100
Subject: [Libav-user] Mapping process functions
Message-ID: <CAJAZ5pYFF7Ki7FAgpLy_e5gj6XPs_9iwiHhJGNgX7ngvrPV9Tg@mail.gmail.com>

Hi
Do you know that is it possible to select those parts of the ffmpeg program
that perform mapping (create intra-prediction modes) and combine them
together in a way that it become like a program that accept an image as
input and its output is a text file containing intra-prediction modes? If
your answer is yes, can you do that?
Cheers
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121120/1e484f46/attachment.html>

From nkipe at tatapowersed.com Tue Nov 20 09:49:20 2012
From: nkipe at tatapowersed.com (Navin)
Date: Tue, 20 Nov 2012 14:19:20 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <50AAF761.30704@tatapowersed.com>
References: <50AAF761.30704@tatapowersed.com>
Message-ID: <50AB4410.1040205@tatapowersed.com>

Been trying these techniques, but I keep getting crashes. Help please?:

avpicture_fill((AVPicture*) frame2, frame2_buffer,
PIX_FMT_RGB24, width2, height2);
//printf("data = %d, %d, %d, %d
\n",frame2->data[0],frame2->data[1],frame2->data[2],frame2->data[3]);
//printf("linesize = %d %d %d\n",frame2->linesize[0],
frame2->linesize[1], frame2->linesize[2]);
//printf("width = %d\n", pCodecCtx->width);
//printf("height = %d\n", pCodecCtx->height);
//std::cin.get();

int linesize = frame2->linesize[0];
for(int xx = 0; xx < (linesize * width1)-1; xx += 3)
{
int r = frame2->data[0][xx];//int r = frame2->data[0][xx];
int g = frame2->data[0][xx+1];
int b = frame2->data[0][xx+2];
printf("xx=%d r=%d, g=%d, b=%d \n",xx, r,
g, b);
}
printf("frame%d done----------------",i++);
//for(int xx = 0; xx < width1; xx = xx + 3)
//{
// for(int yy = 0; yy < height1; ++yy)
// {
// //int p = xx*3 + yy*frame2->linesize[0];
// //int p = xx * 3 + yy * linesize;
// printf("yy=%d xx=%d",yy,xx);
// int p = yy * linesize + xx;
// printf("p=%d\n",p);
// int r = frame2->data[0][p];
// int g = frame2->data[0][p+1];
// int b = frame2->data[0][p+2];
// printf("[r=%d, g=%d, b=%d ]\n", r, g, b);
// }//for
//}//for

Nav

On 11/20/2012 8:52 AM, Nav wrote:
> Hi! Glad to be part of this mailing list.
> What I wanted to create, was a program which would receive a streaming
> video, and when it decodes a frame of the video into either a bitmap
> format or just pure RGB (perhaps stored in a char array), it would
> notify another program that it has received a frame, and the other
> program would take the RGB values and display it.
> I've already asked this question here:
> http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=805
> and rogerdpack told me to post my question on the libav mailing list.
> I have been through many websites, but they either use img_convert
> (which doesn't work) or sws_scale, which crashes when I try to use it
> with RGB.
> Could anyone help with a complete piece of code which can give me the
> RGB values of a frame?
>
> This is a part of the YUV conversion that I tried initially.
>
> i=0;
> while(av_read_frame(pFormatCtx, &packet) >= 0)
> {
> // Is this a packet from the video stream?
> if(packet.stream_index==videoStream)
> {
> // Decode video frame
> avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
>
> // Did we get a video frame?
> if(frameFinished)
> {
> // Convert the image into YUV format that SDL uses
> sws_scale( sws_ctx, (uint8_t const * const *)pFrame->data,
> pFrame->linesize, 0, pCodecCtx->height, pict.data, pict.linesize );
>
>

From rjvbertin at gmail.com Tue Nov 20 09:53:30 2012
From: rjvbertin at gmail.com (René J.V. Bertin)
Date: Tue, 20 Nov 2012 09:53:30 +0100
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <50AB4410.1040205@tatapowersed.com>
References: <50AAF761.30704@tatapowersed.com>
<50AB4410.1040205@tatapowersed.com>
Message-ID: <ff8d0f6b-4ea3-419f-81cf-b38051ee1b6e@email.android.com>

Have you looked at the tutorial concerning ffmpeg with sdl, and/or the ffplay source code? Other things to check out would be the motion jpeg encoder - afaik jpg images are in rgb space.

R

Navin <nkipe at tatapowersed.com> wrote:

>Been trying these techniques, but I keep getting crashes. Help please?:
>
> avpicture_fill((AVPicture*) frame2, frame2_buffer,
>PIX_FMT_RGB24, width2, height2);
> //printf("data = %d, %d, %d, %d
>\n",frame2->data[0],frame2->data[1],frame2->data[2],frame2->data[3]);
> //printf("linesize = %d %d %d\n",frame2->linesize[0],
>frame2->linesize[1], frame2->linesize[2]);
> //printf("width = %d\n", pCodecCtx->width);
> //printf("height = %d\n", pCodecCtx->height);
> //std::cin.get();
>
> int linesize = frame2->linesize[0];
> for(int xx = 0; xx < (linesize * width1)-1; xx += 3)
> {
> int r = frame2->data[0][xx];//int r = frame2->data[0][xx];
> int g = frame2->data[0][xx+1];
> int b = frame2->data[0][xx+2];
> printf("xx=%d r=%d, g=%d, b=%d \n",xx, r,
>g, b);
> }
> printf("frame%d done----------------",i++);
> //for(int xx = 0; xx < width1; xx = xx + 3)
> //{
> // for(int yy = 0; yy < height1; ++yy)
> // {
> // //int p = xx*3 + yy*frame2->linesize[0];
> // //int p = xx * 3 + yy * linesize;
> // printf("yy=%d xx=%d",yy,xx);
> // int p = yy * linesize + xx;
> // printf("p=%d\n",p);
> // int r = frame2->data[0][p];
> // int g = frame2->data[0][p+1];
> // int b = frame2->data[0][p+2];
> // printf("[r=%d, g=%d, b=%d ]\n", r, g, b);
> // }//for
> //}//for
>
>Nav
>
>On 11/20/2012 8:52 AM, Nav wrote:
>> Hi! Glad to be part of this mailing list.
>> What I wanted to create, was a program which would receive a
>streaming
>> video, and when it decodes a frame of the video into either a bitmap
>> format or just pure RGB (perhaps stored in a char array), it would
>> notify another program that it has received a frame, and the other
>> program would take the RGB values and display it.
>> I've already asked this question here:
>> http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=805
>> and rogerdpack told me to post my question on the libav mailing list.
>> I have been through many websites, but they either use img_convert
>> (which doesn't work) or sws_scale, which crashes when I try to use it
>
>> with RGB.
>> Could anyone help with a complete piece of code which can give me the
>
>> RGB values of a frame?
>>
>> This is a part of the YUV conversion that I tried initially.
>>
>> i=0;
>> while(av_read_frame(pFormatCtx, &packet) >= 0)
>> {
>> // Is this a packet from the video stream?
>> if(packet.stream_index==videoStream)
>> {
>> // Decode video frame
>> avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
>&packet);
>>
>> // Did we get a video frame?
>> if(frameFinished)
>> {
>> // Convert the image into YUV format that SDL uses
>> sws_scale( sws_ctx, (uint8_t const * const *)pFrame->data,
>> pFrame->linesize, 0, pCodecCtx->height, pict.data, pict.linesize );
>>
>>
>_______________________________________________
>Libav-user mailing list
>Libav-user at ffmpeg.org
>http://ffmpeg.org/mailman/listinfo/libav-user


From mc at enciris.com Tue Nov 20 10:06:31 2012
From: mc at enciris.com (Malik Cissé)
Date: Tue, 20 Nov 2012 04:06:31 -0500
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <50AB4410.1040205@tatapowersed.com>
References: <50AAF761.30704@tatapowersed.com>
<50AB4410.1040205@tatapowersed.com>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF31A9@VMBX116.ihostexchange.net>

Hi Navin,

You don't need RGB for bitmap. AFAIK BMP pictures can also embed YUV4:2:0
Here is some H.264 decoding code that works fine. It may help you extract the right parts.




//#include "stdafx.h"
#include "stdio.h"
#include "stdlib.h"
#include "avcodec/avcodec.h"

static int FindStartCode (unsigned char *Buf, int zeros_in_startcode)
{
int info;
int i;

info = 1;
for (i = 0; i < zeros_in_startcode; i++)
if(Buf[i] != 0)
info = 0;

if(Buf[i] != 1)
info = 0;
return info;
}

int getNextNal(FILE* inpf, unsigned char* Buf)
{
int pos = 0;
while(!feof(inpf) && (Buf[pos++]=fgetc(inpf))==0);

int StartCodeFound = 0;
int info2 = 0;
int info3 = 0;

while (!StartCodeFound)
{
if (feof (inpf))
{
return -1;
}
Buf[pos++] = fgetc (inpf);
info3 = FindStartCode(&Buf[pos-4], 3);
if(info3 != 1)
info2 = FindStartCode(&Buf[pos-3], 2);
StartCodeFound = (info2 == 1 || info3 == 1);
}
fseek (inpf, -4, SEEK_CUR);
return pos - 4;
}

int main()
{
int i;
FILE * inpf;
FILE * outf;
inpf = fopen("test.264", "rb");
outf = fopen("out1_720x576.yuv", "wb");
int nalLen = 0;
unsigned char* Buf = (unsigned char*)calloc ( 1000000, sizeof(char));
unsigned char* Buf1 = (unsigned char*)calloc ( 1000000, sizeof(char));
int width = 720;
int height = 576;
//unsigned char yuvData[352 * 240 * 3];


AVCodec *codec; // Codec
AVCodecContext *c; // Codec Context
AVFrame *picture; // Frame

avcodec_init();
avcodec_register_all();
codec = avcodec_find_decoder(CODEC_ID_H264);

if (!codec) {
return 0;
}
//allocate codec context
c = avcodec_alloc_context();
if(!c){
return 0;
}
//open codec
if (avcodec_open(c, codec) < 0) {
return 0;
}

//allocate frame buffer
picture = avcodec_alloc_frame();
if(!picture){
return 0;
}

while(!feof(inpf))
{
//printf("cntr: %d\n", cntr++);
nalLen = getNextNal(inpf, Buf);

//try to decode this frame
int got_picture, result;
result= avcodec_decode_video(c, picture, &got_picture, Buf, nalLen);
if(result > 0)
{
//fwrite(picture->data[0],1,width*height,outf);
//fwrite(picture->data[1],1,width*height/4,outf);
//fwrite(picture->data[2],1,width*height/4,outf);

#if 0 //this is also ok!
for(i=0; i<c->height; i++)
fwrite(picture->data[0] + i * picture->linesize[0], 1, c->width, outf );
for(i=0; i<c->height/2; i++)
fwrite(picture->data[1] + i * picture->linesize[1], 1, c->width/2, outf );
for(i=0; i<c->height/2; i++)
fwrite(picture->data[2] + i * picture->linesize[2], 1, c->width/2, outf );
#endif

//http://ffmpeg.org/doxygen/trunk/group__lavc__picture.html#g99bc554a84681f7ee6422a384007a4ca
//Copy pixel data from an AVPicture into a buffer, always assume a linesize alignment of 1.
avpicture_layout((AVPicture *)picture, c->pix_fmt, c->width, c->height, Buf1, c->width*c->height*1.5);
fwrite(Buf1, c->width*c->height*1.5, 1, outf);

}
}

if(c) {
avcodec_close(c);
av_free(c);
c = NULL;
}
if(picture) {
av_free(picture);
picture = NULL;
}
return 1;
}
Malik,


-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Navin
Sent: 20 November 2012 09:49
To: libav-user at ffmpeg.org
Subject: Re: [Libav-user] get RGB values from ffmpeg frame

Been trying these techniques, but I keep getting crashes. Help please?:

avpicture_fill((AVPicture*) frame2, frame2_buffer, PIX_FMT_RGB24, width2, height2);
//printf("data = %d, %d, %d, %d \n",frame2->data[0],frame2->data[1],frame2->data[2],frame2->data[3]);
//printf("linesize = %d %d %d\n",frame2->linesize[0],
frame2->linesize[1], frame2->linesize[2]);
//printf("width = %d\n", pCodecCtx->width);
//printf("height = %d\n", pCodecCtx->height);
//std::cin.get();

int linesize = frame2->linesize[0];
for(int xx = 0; xx < (linesize * width1)-1; xx += 3)
{
int r = frame2->data[0][xx];//int r = frame2->data[0][xx];
int g = frame2->data[0][xx+1];
int b = frame2->data[0][xx+2];
printf("xx=%d r=%d, g=%d, b=%d \n",xx, r,
g, b);
}
printf("frame%d done----------------",i++);
//for(int xx = 0; xx < width1; xx = xx + 3)
//{
// for(int yy = 0; yy < height1; ++yy)
// {
// //int p = xx*3 + yy*frame2->linesize[0];
// //int p = xx * 3 + yy * linesize;
// printf("yy=%d xx=%d",yy,xx);
// int p = yy * linesize + xx;
// printf("p=%d\n",p);
// int r = frame2->data[0][p];
// int g = frame2->data[0][p+1];
// int b = frame2->data[0][p+2];
// printf("[r=%d, g=%d, b=%d ]\n", r, g, b);
// }//for
//}//for

Nav

On 11/20/2012 8:52 AM, Nav wrote:
> Hi! Glad to be part of this mailing list.
> What I wanted to create, was a program which would receive a streaming
> video, and when it decodes a frame of the video into either a bitmap
> format or just pure RGB (perhaps stored in a char array), it would
> notify another program that it has received a frame, and the other
> program would take the RGB values and display it.
> I've already asked this question here:
> http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=805
> and rogerdpack told me to post my question on the libav mailing list.
> I have been through many websites, but they either use img_convert
> (which doesn't work) or sws_scale, which crashes when I try to use it
> with RGB.
> Could anyone help with a complete piece of code which can give me the
> RGB values of a frame?
>
> This is a part of the YUV conversion that I tried initially.
>
> i=0;
> while(av_read_frame(pFormatCtx, &packet) >= 0)
> {
> // Is this a packet from the video stream?
> if(packet.stream_index==videoStream)
> {
> // Decode video frame
> avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
>
> // Did we get a video frame?
> if(frameFinished)
> {
> // Convert the image into YUV format that SDL uses
> sws_scale( sws_ctx, (uint8_t const * const *)pFrame->data,
> pFrame->linesize, 0, pCodecCtx->height, pict.data, pict.linesize );
>
>
_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

From cehoyos at ag.or.at Tue Nov 20 11:03:41 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 20 Nov 2012 10:03:41 +0000 (UTC)
Subject: [Libav-user] get RGB values from ffmpeg frame
References: <50AAF761.30704@tatapowersed.com>
<50AB4410.1040205@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31A9@VMBX116.ihostexchange.net>
Message-ID: <loom.20121120T110040-440@post.gmane.org>

Malik Cissé <mc at ...> writes:

> You don't need RGB for bitmap. AFAIK BMP pictures
> can also embed YUV4:2:0

I didn't know bmp supports yuv.
Do you have a sample or a source that confirms this?

Carl Eugen


From cehoyos at ag.or.at Tue Nov 20 11:08:42 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 20 Nov 2012 10:08:42 +0000 (UTC)
Subject: [Libav-user] get RGB values from ffmpeg frame
References: <50AAF761.30704@tatapowersed.com>
Message-ID: <loom.20121120T110352-503@post.gmane.org>

Navin <nkipe at ...> writes:

> I have been through many websites, but they either use
> img_convert (which doesn't work) or sws_scale, which
> crashes when I try to use it with RGB.

I may be missing something, but I guess this is the important
problem you should work on (see doc/examples).

Possibly unrelated:
In your source code, I did not find information about which
decoder you are using.
Some decoders (everything mpeg-related) output yuv, and
if you want the data in rgb, you have to convert it
(using the software scaler); others (some lossless formats)
output rgb, which may already be the data you want.
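
For illustration, a minimal (untested) sketch of that conversion with the
software scaler, using placeholder names:

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

/* dec_ctx is the opened decoder context, frame an already-decoded frame. */
static void frame_to_rgb(AVCodecContext *dec_ctx, AVFrame *frame)
{
    int w = dec_ctx->width, h = dec_ctx->height, x, y;
    AVFrame *rgb = avcodec_alloc_frame();
    uint8_t *buf = av_malloc(avpicture_get_size(PIX_FMT_RGB24, w, h));
    struct SwsContext *sws = sws_getContext(w, h, dec_ctx->pix_fmt,
                                            w, h, PIX_FMT_RGB24,
                                            SWS_BILINEAR, NULL, NULL, NULL);

    avpicture_fill((AVPicture *)rgb, buf, PIX_FMT_RGB24, w, h);
    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, h, rgb->data, rgb->linesize);

    /* Packed R,G,B bytes per pixel; note the row stride is linesize[0],
     * which can be larger than 3*w. */
    for (y = 0; y < h; y++) {
        uint8_t *row = rgb->data[0] + y * rgb->linesize[0];
        for (x = 0; x < w; x++) {
            int r = row[3*x], g = row[3*x + 1], b = row[3*x + 2];
            (void)r; (void)g; (void)b;   /* use the pixel values here */
        }
    }

    sws_freeContext(sws);
    av_free(buf);
    av_free(rgb);
}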

Carl Eugen


From cehoyos at ag.or.at Tue Nov 20 11:00:05 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 20 Nov 2012 10:00:05 +0000 (UTC)
Subject: [Libav-user] get RGB values from ffmpeg frame
References: <50AAF761.30704@tatapowersed.com>
<50AB4410.1040205@tatapowersed.com>
<ff8d0f6b-4ea3-419f-81cf-b38051ee1b6e@email.android.com>
Message-ID: <loom.20121120T105837-701@post.gmane.org>

René J.V. Bertin <rjvbertin at ...> writes:

> afaik jpg images are in rgb space.

No.
(Lossless jpg supports rgb and FFmpeg supports reading
and writing lossless jpg but this is a very unusual
feature and nearly all non-FFmpeg software will not
support the images.)

Carl Eugen


From cehoyos at ag.or.at Tue Nov 20 11:12:11 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 20 Nov 2012 10:12:11 +0000 (UTC)
Subject: [Libav-user] ffmpeg H.264 decoding
References: <436399CBF77B614A9032D97E1D6E8AF16335EF3101@VMBX116.ihostexchange.net>
Message-ID: <loom.20121120T110901-61@post.gmane.org>

Malik Cissé <mc at ...> writes:

> My question:
> The decoding goes very fast and CPU usage is low.
> Can I guarantee the same performance on similar CPUs?

I may misunderstand the question, but decoding
is a deterministic process (if that is what you
want to know).
General system load and other factors that are
very difficult to foresee in real life of course
affect decoding speed.

> Is any Hardware acceleration (mmx, GPU, DXVA, etc)
> used to decode the frames?

SIMD (mmx) optimizations are hopefully used on
CPUs that support it; hardware decoding is only
used if you explicitly request it.

Carl Eugen


From mc at enciris.com Tue Nov 20 12:00:08 2012
From: mc at enciris.com (Malik Cissé)
Date: Tue, 20 Nov 2012 06:00:08 -0500
Subject: [Libav-user] ffmpeg H.264 decoding
In-Reply-To: <loom.20121120T110901-61@post.gmane.org>
References: <436399CBF77B614A9032D97E1D6E8AF16335EF3101@VMBX116.ihostexchange.net>
<loom.20121120T110901-61@post.gmane.org>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF31D4@VMBX116.ihostexchange.net>

Carl Eugen Hoyos <cehoyos at ...> writes:

> SIMD (mmx) optimizations are hopefully used on CPUs that support it;
> hardware decoding is only used if you explicitly request it.

This is what I wanted to know.
How can I force hw decoding? Is there any good example code for this?

Thanks, Malik Cisse

From nkipe at tatapowersed.com Tue Nov 20 12:08:58 2012
From: nkipe at tatapowersed.com (Navin)
Date: Tue, 20 Nov 2012 16:38:58 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <loom.20121120T110352-503@post.gmane.org>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
Message-ID: <50AB64CA.8000706@tatapowersed.com>

Hey thanks everyone! Still trying/struggling with the code sent by Malik
because avcodec_decode_video is deprecated.
Thought I'd post the entire code here for anyone who could help or for
anyone who needs such code in future. This is the corrected version of
ffmpeg's tutorial02, with SDL added to it too.
Problems are:
1. Since I was facing trouble with RGB, I thought I'd use SDL to output
the frames as bitmaps, but although I could store the bitmap in an area
in memory, it threw an exception when I tried extracting it out of memory.
2. Besides, when at least trying to output the bitmap to hard disk, I'm
unable to output it unless SDL_DisplayYUVOverlay(bmp, &rect); is called
(and I don't want to call it because I don't want the video to be
displayed. I just want the bitmap to be output).
3. Main problem is still with extracting RGB, so would be grateful for
any help. If not RGB, then at least with question 1 (how to store and
extract the BMP from SDL memory).

The code:

extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}
#include<iostream>
#include <SDL.h>
#include <SDL_thread.h>
#include "SDL_opengl.h"
#include <gl/gl.h>
#include <gl/glu.h>

//#ifdef __MINGW32__
#undef main /* Prevents SDL from overriding main() */
//#endif

#include <stdio.h>



//------------------------
// These functions should not be used except from pointers in a RWops
static int myseekfunc(SDL_RWops *context, int offset, int whence) {
SDL_SetError("Can't seek in this kind of RWops"); return(-1); }
static int myreadfunc(SDL_RWops *context, void *ptr, int size, int
maxnum) { memset(ptr,0,size*maxnum); return(maxnum); }
static int mywritefunc(SDL_RWops *context, const void *ptr, int size,
int num) { return(num); }
static int myclosefunc(SDL_RWops *context)
{
if(context->type != 0xdeadbeef) { SDL_SetError("Wrong kind of RWops
for myclosefunc()"); return(-1); }
free(context->hidden.unknown.data1);
SDL_FreeRW(context);
return(0);
}

// Note that this function is NOT static -- we want it directly callable
from other source files
SDL_RWops *MyCustomRWop()
{
SDL_RWops *c=SDL_AllocRW();
if(c==NULL) return(NULL);

c->seek =myseekfunc;
c->read =myreadfunc;
c->write=mywritefunc;
c->close=myclosefunc;
c->type =0xdeadbeef;
printf("deadbeef=%d\n",c->type);
c->hidden.unknown.data1=malloc(100000);
return(c);
}


int main(int argc, char *argv[])
{
AVFormatContext *pFormatCtx = NULL;
int i, videoStream;
AVCodecContext *pCodecCtx = NULL;
AVCodec *pCodec = NULL;
AVFrame *pFrame = NULL;
AVPacket packet;
int frameFinished;
//float aspect_ratio;

AVDictionary *optionsDict = NULL;
struct SwsContext *sws_ctx = NULL;

SDL_Overlay *bmp = NULL;
SDL_Surface *screen = NULL;
SDL_Surface *screen2 = NULL;
SDL_Surface *rgbscreen = NULL;
SDL_Rect rect;
SDL_Event event;


if(argc < 2) { fprintf(stderr, "Usage: test.exe <videofile>\n");
exit(1); }
// Register all formats and codecs
av_register_all();

if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
exit(1); }

// Open video file
if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
return -1; // Couldn't open file

// Retrieve stream information
if(avformat_find_stream_info(pFormatCtx, NULL)<0) return -1; //
Couldn't find stream information

// Dump information about file onto standard error
//av_dump_format(pFormatCtx, 0, argv[1], 0);

// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i; break; }
if(videoStream==-1) return -1; // Didn't find a video stream

// Get a pointer to the codec context for the video stream
pCodecCtx=pFormatCtx->streams[videoStream]->codec;

// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) { fprintf(stderr, "Unsupported codec!\n");
return -1;} // Codec not found

// Open codec
if(avcodec_open2(pCodecCtx, pCodec, &optionsDict) < 0) return -1;
// Could not open codec

// Allocate video frame
pFrame = avcodec_alloc_frame();

#if defined(TRIAL)
int width1 = pCodecCtx->width, height1 = pCodecCtx->height, width2
= pCodecCtx->width, height2 = pCodecCtx->height;
struct SwsContext *resize = sws_getContext(width1, height1,
PIX_FMT_YUV420P, width2, height2, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
NULL, NULL);
AVFrame* frame1 = avcodec_alloc_frame(); // this is your original frame
AVFrame* frame2 = avcodec_alloc_frame();
int num_bytes = avpicture_get_size(PIX_FMT_RGB24, width2, height2);
uint8_t* frame2_buffer = (uint8_t *) av_malloc(num_bytes *
sizeof(uint8_t));

#endif

// Make a screen to put our video
#ifndef __DARWIN__
screen = SDL_SetVideoMode(pCodecCtx->width, pCodecCtx->height,
0, 0);
#else
screen = SDL_SetVideoMode(pCodecCtx->width, pCodecCtx->height,
24, 0);
#endif
if(!screen) { fprintf(stderr, "SDL: could not set video mode -
exiting\n"); exit(1); }

// Allocate a place to put our YUV image on that screen
bmp = SDL_CreateYUVOverlay(pCodecCtx->width,
pCodecCtx->height, SDL_YV12_OVERLAY, screen);

sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL );

// Read frames and save first five frames to disk
i=0;
while(av_read_frame(pFormatCtx, &packet) >= 0)
{
// Is this a packet from the video stream?
if(packet.stream_index==videoStream)
{
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

// Did we get a video frame?
if(frameFinished)
{
#if defined(TRIAL)
avpicture_fill((AVPicture*) frame2, frame2_buffer,
PIX_FMT_RGB24, width2, height2);
printf("data = %d \n",frame2->data[0]);
printf("linesize = %d %d %d\n",frame2->linesize[0],
frame2->linesize[1], frame2->linesize[2]);
printf("width = %d\n", pCodecCtx->width);
printf("height = %d\n", pCodecCtx->height);
std::cin.get();

int linesize = frame2->linesize[0];
for(int xx = 0; xx < (linesize * width1)-1; xx += 3)
{
int r = frame2->data[0][xx];//int r = frame2->data[0][xx];
int g = frame2->data[0][xx+1];
int b = frame2->data[0][xx+2];
printf("xx=%d r=%d, g=%d, b=%d \n",xx, r,
g, b);
}
printf("frame%d done----------------",i++);
//for(int xx = 0; xx < width1; xx = xx + 3)
//{
// for(int yy = 0; yy < height1; ++yy)
// {
// //int p = xx*3 + yy*frame2->linesize[0];
// //int p = xx * 3 + yy * linesize;
// printf("yy=%d xx=%d",yy,xx);
// int p = yy * linesize + xx;
// printf("p=%d\n",p);
// int r = frame2->data[0][p];
// int g = frame2->data[0][p+1];
// int b = frame2->data[0][p+2];
// printf("[r=%d, g=%d, b=%d ]\n", r, g, b);
// }//for
//}//for


// frame1 should be filled by now (eg using avcodec_decode_video)
sws_scale(resize, frame1->data, frame1->linesize, 0, height1,
frame2->data, frame2->linesize);
#endif

SDL_LockYUVOverlay(bmp);//-----------lock

AVPicture pict;
pict.data[0] = bmp->pixels[0];
pict.data[1] = bmp->pixels[2];
pict.data[2] = bmp->pixels[1];

pict.linesize[0] = bmp->pitches[0];
pict.linesize[1] = bmp->pitches[2];
pict.linesize[2] = bmp->pitches[1];

// Convert the image into YUV format that SDL uses
sws_scale( sws_ctx, (uint8_t const * const *)pFrame->data,
pFrame->linesize, 0, pCodecCtx->height, pict.data, pict.linesize );

SDL_UnlockYUVOverlay(bmp);//-----------unlock

rect.x = 0;
rect.y = 0;
rect.w = pCodecCtx->width;
rect.h = pCodecCtx->height;
SDL_DisplayYUVOverlay(bmp, &rect);

//printf("sizeof screen = %d\n",sizeof(*screen));

if (++i != 0)
{
char numberbuffer [33], str[80]; strcpy (str, "bitmap");
strcat (str, itoa(i, numberbuffer, 10)); strcat (str, ".bmp");//file for
storing image
//---------this saves it as a bitmap to filestream. But now
how to extract it?
//SDL_RWops *filestream = MyCustomRWop();//SDL_AllocRW();
//SDL_SaveBMP_RW (screen, filestream, i);

//screen2 = SDL_LoadBMP_RW(filestream,1);//LOADING IS THE
PROBLEM HERE. DON'T KNOW WHY
//filestream->close;
//SDL_SaveBMP(screen2, str);

SDL_SaveBMP(screen, str);//WORKS: saves frame to a file as
a bitmap
}

}//if(frameFinished)
}//if

// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
SDL_PollEvent(&event);
switch(event.type)
{
case SDL_QUIT:
SDL_Quit();
exit(0);
break;
default:
break;
}//switch
}//while

av_free(pFrame);// Free the YUV frame
avcodec_close(pCodecCtx);// Close the codec
avformat_close_input(&pFormatCtx);// Close the video file

return 0;
}//main

Nav

On 11/20/2012 3:38 PM, Carl Eugen Hoyos wrote:
> Navin<nkipe at ...> writes:
>
>> I have been through many websites, but they either use
>> img_convert (which doesn't work) or sws_scale, which
>> crashes when I try to use it with RGB.
> I may be missing something, but I guess this is the important
> problem you should work on (see doc/examples).
>
> Possibly unrelated:
> In your source code, I did not find information about which
> decoder you are using.
> Some decoders (everything mpeg-related) output yuv, and
> if you want the data in rgb, you have to convert it
> (using the software scaler); others (some lossless formats)
> output rgb, which may already be the data you want.
>
> Carl Eugen
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>

From mc at enciris.com Tue Nov 20 12:21:28 2012
From: mc at enciris.com (Malik Cissé)
Date: Tue, 20 Nov 2012 06:21:28 -0500
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <50AB64CA.8000706@tatapowersed.com>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF31D6@VMBX116.ihostexchange.net>

Hi Navin,
Here is an example with SDL and avcodec_decode_video2 (which is not deprecated).

int main(int argc, char **argv)
{
AVCodec *codec;
AVCodecContext *c= NULL;
int got_picture, len;
AVFrame *picture;
int out_size_y;
int out_size_cr;
AVPacket pkt;
int result;
unsigned char *pIdxBuffer;
int IdxBufferSize;
int IdxByteTransfered;
unsigned char *pHdrBuffer;
int HdrBufferSize;
int HdrByteTransfered;
unsigned long long fileSize;
FILE *pFileAsf;
int isFirstTime = 1;
int Status = LT_SUCCESS;
int FrameType = 0;
int LoopCnt = 0;
metaD_t metaD;
unsigned char *pVidData = (unsigned char *)malloc(MAX_FR_SZ);
int vidBytesXfered = 0;
unsigned char *pAudData = (unsigned char *)malloc(MAX_FR_SZ);
int audBytesXfered = 0;
int noSamples;//number of audio samples
unsigned char *pAsfData = (unsigned char *)malloc(MAX_FR_SZ);
int asfBytesXfered = 0;
char baseName[50] = "output_asf";
char fileName[50];
char strNumber[20];
char strHdr[5] = ".asf";
FILE *outfile = fopen("test_720x480.yuv", "wb");//output decompressed file
FILE *outfile1 = fopen("test_720x480.h264", "wb");//output decompressed file
int i,j;
//########### sdl stuff ################
SDL_Surface *screen;
SDL_Overlay *bmp;
SDL_Rect rect;
bool running;
SDL_Event event;
BOOL toggle = 1;

printf("Video decoding\n");

//init
init_user_data(&metaD);

LtHandle = 0;
pIdxBuffer = (unsigned char *)malloc(ASF_IDX_SZ);
IdxBufferSize = ASF_IDX_SZ;
pHdrBuffer = (unsigned char *)malloc(metaD.asf_packet_size);
HdrBufferSize = metaD.asf_packet_size;

//init board
//code has been removed...

//out_size_y = metaD.vc1_width * metaD.vc1_height;
//out_size_cr = (metaD.vc1_width * metaD.vc1_height)/4;

get_hdr_info(&metaD);

result = init_ffmpeg_lib();
if(result < 0)
{
fprintf(stderr, "Error while initializing FFMPEG lib\n");
exit(1);
}

/* find the video decoder */
if(IS_H264)
{
codec = avcodec_find_decoder(CODEC_ID_H264);
}
else
{
codec = avcodec_find_decoder(CODEC_ID_VC1);
}
if (!codec)
{
fprintf(stderr, "codec not found\n");
exit(1);
}

c= avcodec_alloc_context();

//###############################################

//###############################################


picture= (AVFrame *)avcodec_alloc_frame();


//h264
if(IS_H264)
{
//c->extradata = extra;
//c->extradata_size = 14;
}
else
{
//vc1
//this is important
c->width = metaD.vc1_width;
c->height = metaD.vc1_height;
c->extradata = metaD.pTopLevelHeader;
c->extradata_size = metaD.codec_data_len;
}

/* open it */
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}

if (!outfile)
{
av_free(c);
exit(1);
}

//######## SDL stuff ############################
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
{
fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
exit(1);
}
screen = SDL_SetVideoMode(metaD.vc1_width, metaD.vc1_height, 0, 0);
if(!screen)
{
fprintf(stderr, "SDL: could not set video mode - exiting\n");
exit(1);
}

// Allocate a place to put our YUV image on that screen
bmp = SDL_CreateYUVOverlay(metaD.vc1_width,
metaD.vc1_height,
SDL_YV12_OVERLAY,
screen);
//######## end SDL stuff ############################

LT_StartVc1(LtHandle);
running = TRUE;
while(running)//audio video fetch loop
{
printf("LoopCnt: %d\r", LoopCnt++);

Status = LT_GetVc1Frame(LtHandle, pVidData, MAX_FR_SZ, &vidBytesXfered, &FrameType);
if( Status < 0 ) // did an error occur?
{
printf("Vc1 fetch error\n");
LT_GetError(Status);
return Status;
}
fwrite(pVidData, sizeof(char), vidBytesXfered, outfile1);

//decode VC-1 image
av_init_packet(&pkt);
pkt.data = pVidData;
pkt.size = vidBytesXfered;
len = avcodec_decode_video2(c, picture, &got_picture, &pkt);
//len = avcodec_decode_video(c, picture, &got_picture, pVidData, vidBytesXfered);
if (len < 0)
{
fprintf(stderr, "Error while decoding frame\n");
exit(1);
}
#if 0
if (got_picture)
{
for(i=0; i<c->height; i++)
fwrite(picture->data[0] + i * picture->linesize[0], 1, c->width, outfile );
for(i=0; i<c->height/2; i++)
fwrite(picture->data[1] + i * picture->linesize[1], 1, c->width/2, outfile );
for(i=0; i<c->height/2; i++)
fwrite(picture->data[2] + i * picture->linesize[2], 1, c->width/2, outfile );
}
#endif

#if 1
// Did we get a video frame?
if(got_picture)
{
SDL_LockYUVOverlay(bmp);

AVPicture pict;
pict.data[0] = bmp->pixels[0];
pict.data[1] = bmp->pixels[2];
pict.data[2] = bmp->pixels[1];

pict.linesize[0] = bmp->pitches[0];
pict.linesize[1] = bmp->pitches[2];
pict.linesize[2] = bmp->pitches[1];

SwsContext * convert_ctx = sws_getContext ( c->width, c->height, c->pix_fmt,c->width, c->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL );

sws_scale( convert_ctx, picture->data, picture->linesize, 0, c->height, pict.data, pict.linesize );

sws_freeContext ( convert_ctx );


/*
// Convert the image into YUV format that SDL uses
img_convert(&pict, PIX_FMT_YUV420P,
(AVPicture *)picture, c->pix_fmt,
c->width, c->height);
*/
SDL_UnlockYUVOverlay(bmp);

rect.x = 0;
rect.y = 0;
rect.w = c->width;
rect.h = c->height;
SDL_DisplayYUVOverlay(bmp, &rect);
//av_free_packet(&packet);
}
#endif

#if 0//only works with VC-1
//############ display ####################
//av_free_packet(&pkt);
// Did we get a video frame?
if(got_picture)
{
SDL_LockYUVOverlay(bmp);
memcpy(bmp->pixels[0], picture->data[0], c->width * c->height);//y
memcpy(bmp->pixels[2], picture->data[1], (c->width * c->height)/4);//v
memcpy(bmp->pixels[1], picture->data[2], (c->width * c->height)/4);//u
SDL_UnlockYUVOverlay(bmp);

rect.x = 0;
rect.y = 0;
rect.w = c->width;
rect.h = c->height;
SDL_DisplayYUVOverlay(bmp, &rect);//display this image
}
#endif

//should probably not be in the loop
SDL_PollEvent(&event);
switch(event.type)
{
//printf("");
case SDL_QUIT:
running = FALSE;
SDL_Quit();
exit(0);
break;
case SDL_KEYDOWN:
switch(event.key.keysym.sym)
{
case SDLK_ESCAPE:
case SDLK_q:
running = FALSE;
SDL_Quit();
exit(0);
break;
case SDLK_f:
toggle = !toggle;
break;
default:
break;
}
default:
break;
}
}
running = FALSE;

// Close dev handle
//......



FreeLtDll(m_ltDll);
printf("<-- CloseDevice\n\n");

fclose(outfile);
fclose(outfile1);
avcodec_close(c);
av_free(c);
av_free(picture);
SDL_FreeYUVOverlay( bmp );
SDL_FreeSurface( screen );
SDL_CloseAudio();
SDL_Quit();
printf("\n");
}


From cehoyos at ag.or.at Tue Nov 20 12:21:01 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 20 Nov 2012 11:21:01 +0000 (UTC)
Subject: [Libav-user] ffmpeg H.264 decoding
References: <436399CBF77B614A9032D97E1D6E8AF16335EF3101@VMBX116.ihostexchange.net>
<loom.20121120T110901-61@post.gmane.org>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D4@VMBX116.ihostexchange.net>
Message-ID: <loom.20121120T121513-361@post.gmane.org>

Malik Cissé <mc at ...> writes:

> How can I force hw decoding?

This depends on which hardware decoder you want to use;
at least the following APIs are "supported" (please
understand that in this case, this does not mean you
can use them without any additional effort, it just
means that some code necessary to use hardware
decoding is in place; you typically have to add code
inside your application):
xvmc, vdpau, va-api, dxva
OpenElec has added xvba code to their local copy of
libavcodec.
Additionally, stagefright probably allows you to use
hardware decoding in a more straightforward manner.

> Is there any good example code for this?

Both MPlayer and vlc use (different) hardware
decoders supported by libavcodec.

Carl Eugen


From mc at enciris.com Tue Nov 20 12:30:30 2012
From: mc at enciris.com (=?iso-8859-1?Q?Malik_Ciss=E9?=)
Date: Tue, 20 Nov 2012 06:30:30 -0500
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <50AB64CA.8000706@tatapowersed.com>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>

Hi Navin,

Also YUV4:2:0 in BMP does not seem feasible (sorry for my previous statement).
One thing you can also do is use ffmpeg in a system call in your code.
This is very versatile (you need to replace 422 by 420, though):
char ffmpegString[256];//should be plenty
char fmt[] = "bmp"; //bmp, jpg, png, pgm etc...
sprintf(ffmpegString, "ffmpeg.exe -s %dx%d -vcodec rawvideo -f rawvideo -pix_fmt uyvy422 -i raw_uyvy422.yuv -f image2 raw_uyvy422.%s",RawWidth, RawHeight, fmt);
system(ffmpegString);

Malik Cisse



From nkipe at tatapowersed.com Tue Nov 20 12:38:41 2012
From: nkipe at tatapowersed.com (Navin)
Date: Tue, 20 Nov 2012 17:08:41 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
Message-ID: <50AB6BC1.30308@tatapowersed.com>

Actually, Malik, the idea was not to use ffmpeg.exe. I wanted the
bitmap/RGB values to be stored in memory (a data structure) in my program,
and then I'd pass a pointer to that data structure to a client program
running at the same time. The client program would then take the info from
the data structure and use it to display the video frame.
Thanks for the code you posted. Macros and datatypes like LT_SUCCESS,
MAX_FR_SZ, metaD_t, LtHandle, ASF_IDX_SZ etc. seem new. I'm a bit puzzled.

Navin

On 11/20/2012 5:00 PM, Malik Cissé wrote:
> Hi Navin,
>
> Also YUV4:2:0 in BMP does not seem feasible (sorry for my previous statement).
> One think you can do too is use ffmpeg in a system call in your code.
> This is very versatile (you need to replace 422 by 420 though)
> char ffmpegString[256];//should be plenty
> char fmt[] = "bmp"; //bmp, jpg, png, pgm etc...
> sprintf(ffmpegString, "ffmpeg.exe -s %dx%d -vcodec rawvideo -f rawvideo -pix_fmt uyvy422 -i raw_uyvy422.yuv -f image2 raw_uyvy422.%s",RawWidth, RawHeight, fmt);
> system(ffmpegString);
>
> Malik Cisse
>
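
For the in-memory approach described above (decoded frames converted to RGB and handed to another component as a plain buffer), a minimal sketch might look like the following. It reuses names from the tutorial code (pCodecCtx, pFrame); the packed, unpadded buffer layout is an assumption, and actually sharing the buffer with a separate process would still need shared memory or a socket, which is not shown.

    int w = pCodecCtx->width, h = pCodecCtx->height;
    uint8_t *rgb = (uint8_t *)av_malloc(w * h * 3);      /* packed RGB24, no padding */
    uint8_t *dst_data[4]   = { rgb, NULL, NULL, NULL };
    int      dst_stride[4] = { 3 * w, 0, 0, 0 };

    struct SwsContext *to_rgb = sws_getContext(w, h, pCodecCtx->pix_fmt,
                                               w, h, PIX_FMT_RGB24,
                                               SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(to_rgb, (const uint8_t * const *)pFrame->data, pFrame->linesize,
              0, h, dst_data, dst_stride);
    sws_freeContext(to_rgb);

    /* rgb[(y*w + x)*3 + 0..2] now holds R, G, B for pixel (x, y);
       this is the buffer a client data structure could point at. */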

From mc at enciris.com Tue Nov 20 13:46:40 2012
From: mc at enciris.com (=?iso-8859-1?Q?Malik_Ciss=E9?=)
Date: Tue, 20 Nov 2012 07:46:40 -0500
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <50AB6BC1.30308@tatapowersed.com>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>

Navin,
> Macros and datatypes like LT_SUCCESS, MAX_FR_SZ, metaD_t, LtHandle, ASF_IDX_SZ etc. seem new. Am a bit puzzled.
The code is taken from a proprietary project and I did not want to disclose everything, but the main idea is there.



From mc at enciris.com Tue Nov 20 13:50:23 2012
From: mc at enciris.com (=?utf-8?B?TWFsaWsgQ2lzc8Op?=)
Date: Tue, 20 Nov 2012 07:50:23 -0500
Subject: [Libav-user] ffmpeg H.264 decoding
In-Reply-To: <loom.20121120T110901-61@post.gmane.org>
References: <436399CBF77B614A9032D97E1D6E8AF16335EF3101@VMBX116.ihostexchange.net>
<loom.20121120T110901-61@post.gmane.org>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF31F4@VMBX116.ihostexchange.net>

Carl Eugen, Thanks for the info. This is what I wanted to know.

-----Original Message-----
From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Carl Eugen Hoyos
Sent: 20 November 2012 11:12
To: libav-user at ffmpeg.org
Subject: Re: [Libav-user] ffmpeg H.264 decoding

Malik Cissé <mc at ...> writes:

> My question:
> The decoding goes very fast and CPU usage is low.
> Can I guarantee the same performance on similar CPUs?

I may misunderstand the question, but decoding is a deterministic process (if that is what you want to know).
General system load and other factors that are very difficult to foresee in real life of course affect decoding speed.

> Is any Hardware acceleration (mmx, GPU, DXVA, etc) used to decode the
> frames?

SIMD (mmx) optimizations are hopefully used on CPUs that support them; hardware decoding is only used if you explicitly request it.

Carl Eugen

_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user

From kamlendra at vvidiacom.com Tue Nov 20 15:00:32 2012
From: kamlendra at vvidiacom.com (kamlendra chandra)
Date: Tue, 20 Nov 2012 06:00:32 -0800 (PST)
Subject: [Libav-user] setting framerate using libav
Message-ID: <1353420032782-4656082.post@n4.nabble.com>

hi all,

I am new to libav. I am trying to transcode a video using libav, taking
help from the example code decoding_encoding.c shipped with the ffmpeg source
code. I am trying to transcode a video in an mp4 container to a different
codec, CODEC_ID_MPEG1VIDEO, but every time the program exits throwing
[mpeg1video @ 0x15a12f60] framerate not set
could not open codec


/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
/* put sample parameters */
c_out->bit_rate = c ->bit_rate;
/* resolution must be a multiple of two */
c_out->width = c->width;
c_out->height = c->height;
/* frames per second */
c_out->time_base= c->time_base;
c_out->gop_size = c->gop_size; /* emit one intra frame every ten frames
*/
c_out->max_b_frames=c->max_b_frames;
c_out->pix_fmt = c->pix_fmt;

if(codec_id == CODEC_ID_MPEG1VIDEO)
av_opt_set(c->priv_data, "preset", "slow", 0);

/* open it */
if (avcodec_open2(c_out, codec_out, NULL) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}

Please suggest how we can set the frame rate for encoding. Thanks in advance.

kamlendra



--
View this message in context: http://libav-users.943685.n4.nabble.com/setting-framerate-using-libav-tp4656082.html
Sent from the libav-users mailing list archive at Nabble.com.
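
One likely cause of the error above: the decoder context's time_base that the snippet copies into c_out is often not a usable frame rate, and mpeg1video refuses to open without one (it also only accepts a small set of standard rates). A minimal sketch, assuming a 25 fps output is acceptable:

/* set the frame rate (time_base is its inverse) before avcodec_open2() */
c_out->time_base.num = 1;
c_out->time_base.den = 25;   /* assumed 25 fps; mpeg1video only accepts standard
                                rates such as 23.976, 24, 25, 29.97, 30, 50, 60 */

if (avcodec_open2(c_out, codec_out, NULL) < 0) {
    fprintf(stderr, "could not open codec\n");
    exit(1);
}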


From b.bivas at gmail.com Tue Nov 20 15:01:29 2012
From: b.bivas at gmail.com (Bivas Bhattacharya)
Date: Tue, 20 Nov 2012 19:31:29 +0530
Subject: [Libav-user] Frame drop
Message-ID: <CAOP2uXNLobMfbDp0D0WV4oa=kx2Wc+pv62AWiEVHrrwu6PvP+Q@mail.gmail.com>

Hi

I am trying to simulate a situation where some percentage of I, P and B frames
are dropped or corrupted. I want to observe the effect on video quality.

Is there any way to create this situation just by some trick, as a user of
ffmpeg?

If not, can anyone give me some hints on how I should proceed and which file
I should look into?

Regards
Bivas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121120/853a0876/attachment.html>

From cehoyos at ag.or.at Tue Nov 20 21:16:22 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 20 Nov 2012 20:16:22 +0000 (UTC)
Subject: [Libav-user] Frame drop
References: <CAOP2uXNLobMfbDp0D0WV4oa=kx2Wc+pv62AWiEVHrrwu6PvP+Q@mail.gmail.com>
Message-ID: <loom.20121120T211456-52@post.gmane.org>

Bivas Bhattacharya <b.bivas at ...> writes:

> I am trying to simulate the situation where some percentage
> of I P B frames are getting dropped or corrupted. I want to
> observe effect on video quality.
>
> Is there any way to create this situation just by some trick
> as an user of ffmpeg?
>
> If not can anyone give me some hints how should I proceed
> and which file should I look into.

While your question is missing most necessary information
I believe you can abuse the image2 muxer to create the
frames that you can then either remove or damage before
using the image2 demuxer (or cat) to recreate a media file.

Carl Eugen


From nkipe at tatapowersed.com Wed Nov 21 06:34:41 2012
From: nkipe at tatapowersed.com (Navin)
Date: Wed, 21 Nov 2012 11:04:41 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
Message-ID: <50AC67F1.4000402@tatapowersed.com>

@Everyone: I had seen the doc/examples code and every other possible
source of information, but there seems to be some problem causing the
crashes when I use RGB, even for the Tutorial01 code of ffmpeg below, for
example.
I am using the latest ffmpeg-win32-dev and I've tried with a wmv, many mp4
(from youtube, h264 (high) (avc1/0x31637661), yuv420p 1280x720,
1139kb/s, 24fps) files and avi too, but they all crash with an
unhandled exception, "Access violation writing location 0x002a3000", at the
line "sws_scale". Even in the previous complete program that I posted to
the mailing list, getting the RGB values was a problem because of a
similar crash while iterating through frame->data.
Could anyone please help out with at least this program? Is it working
on your end?


#include "stdio.h"
#include "stdlib.h"
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}
#include<iostream>


void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
FILE *pFile;
char szFilename[32];
int y;

sprintf(szFilename, "frame%d.ppm", iFrame);// Open file
pFile=fopen(szFilename, "wb");
if(pFile==NULL) return;
fprintf(pFile, "P6\n%d %d\n255\n", width, height);// Write header
for(y=0; y<height; y++) fwrite(pFrame->data[0]+y*pFrame->linesize[0],
1, width*3, pFile);// Write pixel data
fclose(pFile);// Close file
}//SaveFrame

int main(int argc, char *argv[])
{
AVFormatContext *pFormatCtx = NULL;
int i, videoStream;
AVCodecContext *pCodecCtx = NULL;
AVCodec *pCodec = NULL;
AVFrame *pFrame = NULL;
AVFrame *pFrameRGB = NULL;
AVPacket packet;
int frameFinished;
int numBytes;
uint8_t *buffer = NULL;

AVDictionary *optionsDict = NULL;
struct SwsContext *sws_ctx = NULL;

if(argc < 2) { printf("Please provide a movie file\n"); return -1; }
av_register_all();// Register all formats and codecs

if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
return -1; // Couldn't open file

if(avformat_find_stream_info(pFormatCtx, NULL)<0) return -1; //
Couldn't find stream information

// Dump information about file onto standard error
av_dump_format(pFormatCtx, 0, argv[1], 0);

// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i; break; }
if(videoStream==-1) return -1; // Didn't find a video stream

// Get a pointer to the codec context for the video stream
pCodecCtx=pFormatCtx->streams[videoStream]->codec;

// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) { fprintf(stderr, "Unsupported codec!\n"); return
-1; }// Codec not found

if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0) return -1; //
Could not open codec

pFrame=avcodec_alloc_frame();// Allocate video frame

// Allocate an AVFrame structure
pFrameRGB=avcodec_alloc_frame();
if(pFrameRGB==NULL) return -1;

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB24,
SWS_BILINEAR, NULL, NULL, NULL );

// Assign appropriate parts of buffer to image planes in pFrameRGB.
Note that pFrameRGB is an AVFrame, but AVFrame is a superset of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
pCodecCtx->width, pCodecCtx->height);

// Read frames and save first five frames to disk
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0)
{
// Is this a packet from the video stream?
if(packet.stream_index==videoStream)
{
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

// Did we get a video frame?
if(frameFinished)
{
// Convert the image from its native format to RGB
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data,
pFrameRGB->linesize );

// Save the frame to disk
if(++i<=5) SaveFrame(pFrameRGB, pCodecCtx->width,
pCodecCtx->height, i);
}
}

// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}//while

av_free(buffer);av_free(pFrameRGB);// Free the RGB image
av_free(pFrame);// Free the YUV frame
avcodec_close(pCodecCtx);// Close the codec
avformat_close_input(&pFormatCtx);// Close the video file

return 0;
}

Navin


On 11/20/2012 6:16 PM, Malik Cissé wrote:
> Navin,
>> Macros and datatypes like LT_SUCCESS, MAX_FR_SZ, metaD_t, LtHandle, ASF_IDX_SZ etc. seem new. Am a bit puzzled.
> The code is taken from a proprietary stuff and I did not want to disclose everything but the main idea is there.
>

From adrozdoff at gmail.com Wed Nov 21 07:42:31 2012
From: adrozdoff at gmail.com (hatred)
Date: Wed, 21 Nov 2012 17:42:31 +1100
Subject: [Libav-user] Troubles with transcoding
Message-ID: <CAP9ViofDjmjK2dxNMPu7c-PQbCRwTCx25vW_Ctnp34=xeDdGDw@mail.gmail.com>

Hi List!

I have FFMPEG 1.0 installed on my ArchLinux box. I am trying to write a simple
audio+video transcoder example using the existing doc/examples/muxing.c and
doc/examples/demuxing.c, and I have a problem: I can't hear any sound in the
resulting file.

Source example with command line to compile: http://pastebin.com/F9R5qpPz
Compile:
g++ -Wall -O2 -g -D__STDC_CONSTANT_MACROS -o video-sound-transcoding-test
video-sound-transcoding-test.cpp -lm -lavdevice -lavformat -lavfilter
-lavcodec -lswresample -lswscale -lavutil

run:
./video-sound-transcoding-test input_file.avi out.ts

I test only for MPEGTS output as more needed for me.

Other info:
1. If the input file contains only a sound stream, transcoding finishes
successfully and the sound in the resulting file plays fine.
2. If I comment out the code and skip the video stream in the input file,
transcoding also finishes successfully and the sound plays fine.
3. Video is transcoded very well.
4. Transcoding with ffmpeg works: ffmpeg -i input_file.avi -f mpegts
out.ts, but I don't fully understand its code.

So I don't understand how I must correctly encode and mux audio+video; all
suggestions are welcome!

Thanks for every answer, List!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121121/37461375/attachment.html>

From b.bivas at gmail.com Wed Nov 21 10:19:20 2012
From: b.bivas at gmail.com (Bivas)
Date: Wed, 21 Nov 2012 09:19:20 +0000 (UTC)
Subject: [Libav-user] Frame drop
References: <CAOP2uXNLobMfbDp0D0WV4oa=kx2Wc+pv62AWiEVHrrwu6PvP+Q@mail.gmail.com>
<loom.20121120T211456-52@post.gmane.org>
Message-ID: <loom.20121121T100356-879@post.gmane.org>

Can you let me know which file would be good to look into, and how to
determine the type of a packet?





From hannes.wuerfel at student.hpi.uni-potsdam.de Wed Nov 21 12:25:27 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Wed, 21 Nov 2012 12:25:27 +0100
Subject: [Libav-user] Troubles with transcoding
In-Reply-To: <CAP9ViofDjmjK2dxNMPu7c-PQbCRwTCx25vW_Ctnp34=xeDdGDw@mail.gmail.com>
References: <CAP9ViofDjmjK2dxNMPu7c-PQbCRwTCx25vW_Ctnp34=xeDdGDw@mail.gmail.com>
Message-ID: <50ACBA27.10202@student.hpi.uni-potsdam.de>

Am 21.11.2012 07:42, schrieb hatred:
> Hi List!
>
> I have FFMPEG 1.0 installed in my ArchLinux box, I try to write simple
> audio+video transcoder example using existings doc/examples/muxing.c
> and doc/examples/demuxing.c ones and I have trouble: I can't listen
> sound in resulting file.
>
> Source example with command line to compile: http://pastebin.com/F9R5qpPz
> Compile:
> g++ -Wall -O2 -g -D__STDC_CONSTANT_MACROS -o
> video-sound-transcoding-test video-sound-transcoding-test.cpp -lm
> -lavdevice -lavformat -lavfilter -lavcodec -lswresample -lswscale -lavutil
>
> run:
> ./video-sound-transcoding-test input_file.avi out.ts
>
> I test only for MPEGTS output as more needed for me.
>
> Other info:
> 1. If input file contain only sound stream transcoding finished
> successfuly and I can listen sound in resulting file well.
> 2. If I comment code and skip video stream in input file transcoding
> finished successfuly also and I can listen sound in resulting file well.
> 3. Video transcoded very well
> 4. transcoding with ffmpeg is valid: ffmpeg -i input_file.avi -f
> mpegts out.ts but I don't fully understand its code.
>
> So I don't understand how to correctly I must encode/muxing
> audio+video and all suggestions is welcome!
>
> Thank for every answer, List!
Hi,

it seems that you are stuck in the same way that I have been for several days.
I really don't want to take over your thread, but it seems to me that we
are both almost in front of the last barrier.
I've got almost the same code as you (merged together from demuxing.c
and muxing.c).
So perhaps, with the help of some advanced ffmpeg user/developer on the
list, we can figure out our problem.
In my case, I'd like to manipulate the video frames of the file with some
image processing kernels etc. and write the file, with the same codecs
it has, back to disc, but of course with the processed frames.
So basically I need a "transcoder" from the codec to the same codec.
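
For reference, the overall shape of such a decode / process / re-encode loop might look like the sketch below. It is not from the thread; decCtx, encCtx, inFmt, outFmt, videoIdx, outVideoStream and process_frame are placeholder names, error handling and encoder/decoder flushing are omitted, and the pts/dts handling discussed later in this thread still applies:

AVPacket pkt, outPkt;
AVFrame *frame = avcodec_alloc_frame();
int gotFrame, gotPacket;

while (av_read_frame(inFmt, &pkt) >= 0) {
    if (pkt.stream_index == videoIdx) {
        avcodec_decode_video2(decCtx, frame, &gotFrame, &pkt);
        if (gotFrame) {
            process_frame(frame);            /* placeholder: your image processing */

            av_init_packet(&outPkt);
            outPkt.data = NULL;              /* let the encoder allocate the payload */
            outPkt.size = 0;
            avcodec_encode_video2(encCtx, &outPkt, frame, &gotPacket);
            if (gotPacket) {
                outPkt.stream_index = outVideoStream->index;
                /* rescale outPkt.pts/dts to outVideoStream->time_base here */
                av_interleaved_write_frame(outFmt, &outPkt);
                av_free_packet(&outPkt);
            }
        }
    }
    av_free_packet(&pkt);
}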

What I didn't have is your code fragment:
"if (outCtx.videoCodecCtx->coded_frame->pts != AV_NOPTS_VALUE)
{
outPkt.pts =
av_rescale_q(outCtx.videoCodecCtx->coded_frame->pts,
outCtx.videoCodecCtx->time_base,
outCtx.videoStream->time_base);
}"

If I put it into my code, almost all or no video frames are encoded
successfully and the output is:
[libx264 @ 0000000002620080] specified frame type (3) at 1314 is not
compatible with keyframe interval
[mp4 @ 00000000034ff080] pts (0) < dts (1352000) in stream 0
Error while writing video frame

I looked into ffmpeg.c as well, but I had no time to step through it while
transcoding, because I'm currently working on a Windows machine.
Perhaps you can explain to me why we have to rescale the presentation
time stamp...

Ah, and you've rescaled the audio too. I did not rescale the audio, but
for every input codec I try, the audio is always fine, even without:

"if (frame->pts == AV_NOPTS_VALUE)
{
AVRational samplesRateInv = {1,
outCtx.audioCodecCtx->sample_rate};
int64_t pts = inputAudioSamples;
frame->pts = av_rescale_q(pts, samplesRateInv,
inCtx.audioCodecCtx->time_base);
}
inputAudioSamples += frame->nb_samples;"

Why is this?

From nkipe at tatapowersed.com Wed Nov 21 12:42:31 2012
From: nkipe at tatapowersed.com (Navin)
Date: Wed, 21 Nov 2012 17:12:31 +0530
Subject: [Libav-user] Memory allocation and avpicture_fill incorrect?
Message-ID: <50ACBE26.5020102@tatapowersed.com>

I was trying out the updated tutorial01 code of
ffmpeg: https://github.com/mpenkov/ffmpeg-tutorial
Since the program crashed unexpectedly at sws_scale with an "Access
violation writing location" error, I was trying to figure out what went
wrong.
Two questions:
1. I looked into the source for avpicture_get_size and saw a lot of code
that I couldn't understand. Why are such complex operations performed in
the function with data and linesize, when the width, height and pixel
format value multiplication should've easily given you the number of bytes?
2. In avpicture_fill, what exactly are we filling a pFrameRGB with? The
video isn't yet decoded, so what is getting filled there? Looking
through the source just left me confused.

I'm suspecting that the memory allocation is being done incorrectly,
which is why sws_scale is crashing. I've written about this in another
mailing list thread too, titled "get RGB values from ffmpeg frame". But
this question is specific to understanding the allocation and filling.

The code I'm talking about from tutorial01:
pFrameRGB = avcodec_alloc_frame();
if (pFrameRGB==NULL) return -1;

// Determine required buffer size and allocate buffer
numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
pCodecCtx->height);
buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB24,
SWS_BILINEAR, NULL, NULL, NULL );

// Assign appropriate parts of buffer to image planes in pFrameRGB.
Note that pFrameRGB is an AVFrame, but AVFrame is a superset of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
pCodecCtx->width, pCodecCtx->height);

--
Navin


From cehoyos at ag.or.at Wed Nov 21 12:47:42 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Wed, 21 Nov 2012 11:47:42 +0000 (UTC)
Subject: [Libav-user] get RGB values from ffmpeg frame
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
<50AC67F1.4000402@tatapowersed.com>
Message-ID: <loom.20121121T124512-813@post.gmane.org>

Navin <nkipe at ...> writes:

> Am using the latest ffmpeg-win32-dev and I've tried with a wmv, many mp4
> (from youtube, h264 (high) (avc1/0x31637661), yuv420p 1280x720,
> 1139kb/s, 24fps) files and avi too, but they're all crashing with
> Unhandled exception, Access violation writing location 0x002a3000 at the
> line "sws_scale".

Did you look at doc/examples/scaling_video.c?
Because I don't see the call to av_image_alloc() in your code...

Partly related:
If you are really interested in working with FFmpeg libraries,
instead of downloading pre-compiled libraries, you should either
install msys and mingw (which provide a debugger that would
have helped you) or compile FFmpeg with msvc.

Carl Eugen


From mc at enciris.com Wed Nov 21 13:33:18 2012
From: mc at enciris.com (=?iso-8859-1?Q?Malik_Ciss=E9?=)
Date: Wed, 21 Nov 2012 07:33:18 -0500
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <loom.20121121T124512-813@post.gmane.org>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
<50AC67F1.4000402@tatapowersed.com>
<loom.20121121T124512-813@post.gmane.org>
Message-ID: <436399CBF77B614A9032D97E1D6E8AF16335EF334C@VMBX116.ihostexchange.net>


Carl Eugen ... writes:
>...compile FFmpeg with msvc.
This sounds interesting. Where can I find an msvc sample project?

Malik Cisse

From mohammad.basseri at gmail.com Wed Nov 21 20:35:39 2012
From: mohammad.basseri at gmail.com (mohammad basseri)
Date: Wed, 21 Nov 2012 23:05:39 +0330
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <436399CBF77B614A9032D97E1D6E8AF16335EF334C@VMBX116.ihostexchange.net>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
<50AC67F1.4000402@tatapowersed.com>
<loom.20121121T124512-813@post.gmane.org>
<436399CBF77B614A9032D97E1D6E8AF16335EF334C@VMBX116.ihostexchange.net>
Message-ID: <CABtRzDPG=sSCtFbhHaD99_xsySGwnQSyC21+nMqGxdG0qqen+Q@mail.gmail.com>

Use mingw in Linux:
1- in a virtual machine, run Linux
2- install mingw
3- install yasm
4- compile yasm, x264 and ffmpeg with mingw


On Wed, Nov 21, 2012 at 4:03 PM, Malik Cissé <mc at enciris.com> wrote:

>
> Carl Eugen ... writes:
> >...compile FFmpeg with msvc.
> This sounds interesting. Where can I find msvc sample project?
>
> Malik Cisse
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121121/f23f7376/attachment.html>

From cehoyos at ag.or.at Thu Nov 22 00:02:05 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Wed, 21 Nov 2012 23:02:05 +0000 (UTC)
Subject: [Libav-user] get RGB values from ffmpeg frame
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
<50AC67F1.4000402@tatapowersed.com>
<loom.20121121T124512-813@post.gmane.org>
<436399CBF77B614A9032D97E1D6E8AF16335EF334C@VMBX116.ihostexchange.net>
Message-ID: <loom.20121121T235434-455@post.gmane.org>

Malik Cissé <mc at ...> writes:

> Carl Eugen ... writes:
> >...compile FFmpeg with msvc.
> This sounds interesting. Where can I find msvc sample project?

Please see the MSVC section on the platform page
of the fine documentation to find information on
how to compile FFmpeg with MSVC.

Carl Eugen


From stonerain at lycos.co.kr Thu Nov 22 02:46:17 2012
From: stonerain at lycos.co.kr (=?UTF-8?B?7J207ISd7Jqw?=)
Date: Thu, 22 Nov 2012 10:46:17 +0900
Subject: [Libav-user] =?utf-8?q?How_can_i_capture_audio_from_microphone=2E?=
Message-ID: <216d243b593632ae357956a9781ed78c$3492f55c@mail3.nate.com>

An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121122/69d94c7f/attachment.html>

From adrozdoff at gmail.com Thu Nov 22 04:32:06 2012
From: adrozdoff at gmail.com (hatred)
Date: Thu, 22 Nov 2012 14:32:06 +1100
Subject: [Libav-user] Troubles with transcoding
In-Reply-To: <50ACBA27.10202@student.hpi.uni-potsdam.de>
References: <CAP9ViofDjmjK2dxNMPu7c-PQbCRwTCx25vW_Ctnp34=xeDdGDw@mail.gmail.com>
<50ACBA27.10202@student.hpi.uni-potsdam.de>
Message-ID: <CAP9VioeNu3LvN9wxxMGdthiTrGRK5P5ypT+JJj5qWY5UFCbY9g@mail.gmail.com>

Hi, Hannes!

My comments below. Worked code also.

2012/11/21 Hannes Wuerfel <hannes.wuerfel at student.hpi.uni-potsdam.de>

> Am 21.11.2012 07:42, schrieb hatred:
>
> In my case, I'd like to manipulate video frames of the file with some
> image processing kernels etc. and write the file with the same codecs they
> have back to disc, but of cause with processed frames.
> So basically I need a "transcoder" from the codec to the same codec.
>
> What I didn't have is your code fragment:
> "if (outCtx.videoCodecCtx->coded_frame->pts != AV_NOPTS_VALUE)
> {
> outPkt.pts =
> av_rescale_q(outCtx.videoCodecCtx->coded_frame->pts,
> outCtx.videoCodecCtx->time_base,
> outCtx.videoStream->time_base);
> }"
>

In general the stream time base and the CodecContext time base are not equal,
but according to the comments in the AVPacket structure the pts field must be
in stream time-base units, so I rescale it before muxing.
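
A minimal sketch of that rescale step, using the names from the quoted fragment (outPkt, outCtx.videoCodecCtx, outCtx.videoStream) and assuming the packet timestamps come out of the encoder in codec time-base units:

if (outPkt.pts != AV_NOPTS_VALUE)
    outPkt.pts = av_rescale_q(outPkt.pts,
                              outCtx.videoCodecCtx->time_base,
                              outCtx.videoStream->time_base);
if (outPkt.dts != AV_NOPTS_VALUE)
    outPkt.dts = av_rescale_q(outPkt.dts,
                              outCtx.videoCodecCtx->time_base,
                              outCtx.videoStream->time_base);
/* duration, if set, is in the same units and can be rescaled the same way */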


>
> If I put it into my code almost all or no video frames are encoded
> succesfully and the output is:
> [libx264 @ 0000000002620080] specified frame type (3) at 1314 is not
> compatible with keyframe interval
> [mp4 @ 00000000034ff080] pts (0) < dts (1352000) in stream 0
> Error while writing video frame
>
>
It is my error: we should also rescale the dts field if it is not equal to
AV_NOPTS_VALUE.


> I looked into ffmpeg.c as well but I had no time to step threw it while
> transcoding, because I'm currently working on a windows machine.
> Perhaps you can explane to me why we have to rescale the presentation
> time stamp...
>

See the comments above. For encoding we can set the same time base on the
stream and the codec context and remove the pts/dts rescaling.


> Ah and you've rescaled the audio too. I did not rescale the audio but for
> every input codec I take, audio is always fine, even withouth:
>
> "if (frame->pts == AV_NOPTS_VALUE)
> {
> AVRational samplesRateInv = {1,
> outCtx.audioCodecCtx->sample_rate};
> int64_t pts = inputAudioSamples;
> frame->pts = av_rescale_q(pts, samplesRateInv,
> inCtx.audioCodecCtx->time_base);
> }
> inputAudioSamples += frame->nb_samples;"
>
> Why is this?


This part of the code was the root cause of the trouble with sound encoding
and muxing. After decoding, the audio frame's pts value is always equal to
AV_NOPTS_VALUE, so I tried to calculate the real value using the sample_rate
and the decoded sample count. But it seems the muxer doesn't want valid pts
values for audio packets; it wants the NOPTS value for correct muxing.
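
A small, untested sketch of that pts calculation (samples_done is a running
counter kept by the caller; the names are placeholders, not my real code):

#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>

/* Give a decoded audio frame a pts derived from the number of samples
 * already produced, when the decoder returned AV_NOPTS_VALUE. */
static void fix_audio_pts(AVFrame *frame, AVCodecContext *enc,
                          int64_t *samples_done)
{
    if (frame->pts == AV_NOPTS_VALUE) {
        AVRational inv_sample_rate = { 1, enc->sample_rate };
        frame->pts = av_rescale_q(*samples_done, inv_sample_rate,
                                  enc->time_base);
    }
    *samples_done += frame->nb_samples;
}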

Worked code: http://pastebin.com/zYTiuRyA

But I also have a couple of questions:
1. How do I correctly resample audio frames if the input and output streams
have a different sample format, sample rate, channel count, channel layout
or frame size? An example would be welcome...
2. How do I correctly synchronize audio and video? In some cases the video
and sound in the transcoded file are not synchronized and I don't
understand why.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121122/584826bc/attachment.html>

From cehoyos at ag.or.at Thu Nov 22 11:23:23 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 22 Nov 2012 10:23:23 +0000 (UTC)
Subject: [Libav-user]
=?utf-8?q?Memory_allocation_and_avpicture=5Ffill_inc?=
=?utf-8?q?orrect=3F?=
References: <50ACBE26.5020102@tatapowersed.com>
Message-ID: <loom.20121122T112243-517@post.gmane.org>

Navin <nkipe at ...> writes:

> I was trying out the updated tutorial01 code of
> ffmpeg:https://github.com/mpenkov/ffmpeg-tutorial

Did you look at doc/examples/scaling_video.c?
I believe it contains exactly the code that you
need to do the yuv420p -> rgb24 conversion that
you want.
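
Roughly, the relevant part boils down to something like this (an untested
sketch with placeholder names, not the example's exact code; "src" is the
decoded frame):

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>

/* Convert a decoded yuv420p frame of size w x h to a freshly allocated
 * rgb24 picture described by rgb_data[] / rgb_linesize[]. */
static int convert_to_rgb24(const AVFrame *src, int w, int h,
                            uint8_t *rgb_data[4], int rgb_linesize[4])
{
    struct SwsContext *sws = sws_getContext(w, h, PIX_FMT_YUV420P,
                                            w, h, PIX_FMT_RGB24,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    /* allocate the destination picture (16-byte aligned) */
    if (av_image_alloc(rgb_data, rgb_linesize, w, h, PIX_FMT_RGB24, 16) < 0) {
        sws_freeContext(sws);
        return -1;
    }

    /* the actual colorspace conversion */
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, h, rgb_data, rgb_linesize);

    sws_freeContext(sws);
    return 0;
}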

Carl Eugen


From stefasab at gmail.com Thu Nov 22 11:37:10 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Thu, 22 Nov 2012 11:37:10 +0100
Subject: [Libav-user] Memory allocation and avpicture_fill incorrect?
In-Reply-To: <50ACBE26.5020102@tatapowersed.com>
References: <50ACBE26.5020102@tatapowersed.com>
Message-ID: <20121122103710.GE32328@arborea>

On date Wednesday 2012-11-21 17:12:31 +0530, Navin wrote:
> I was trying out the updated tutorial01 code of
> ffmpeg:https://github.com/mpenkov/ffmpeg-tutorial
> since the program crashed unexpectedly at sws_scale with an "Access
> violation writing location" error, I was trying to figure out what
> went wrong.
> Two questions:

> 1. I looked into the source for avpicture_get_size and saw a lot of
> code that I couldn't understand. Why are such complex operations
> performed in the function with data and linesize, when the width,
> height and pixel format value multiplication should've easily given
> you the number of bytes?

Because that function is generic, and must work with all the supported
pixel formats.

> 2. In avpicture_fill, what exactly are we filling a pFrameRGB with?
> The video isn't yet decoded, so what is getting filled there?
> Looking through the source just left me confused.

That function is used to fill an AVPicture if you have an already
allocated buffer. Alternatively you can use avpicture_alloc() or
av_image_alloc() to allocate *and* fill the data/linesize arrays.
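
For illustration, a small sketch of the first variant (my own names, not
the tutorial's; error handling shortened):

#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

/* Allocate a buffer yourself and let avpicture_fill() set up the
 * data[] / linesize[] arrays of the frame for the given pixel format. */
static AVFrame *alloc_rgb24_frame(int width, int height)
{
    AVFrame *pic = avcodec_alloc_frame();
    if (!pic)
        return NULL;
    int      size = avpicture_get_size(PIX_FMT_RGB24, width, height);
    uint8_t *buf  = av_malloc(size);
    if (!buf) {
        av_free(pic);
        return NULL;
    }
    avpicture_fill((AVPicture *)pic, buf, PIX_FMT_RGB24, width, height);
    return pic; /* av_image_alloc() would do the allocate-and-fill in one call */
}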

> I'm suspecting that the memory allocation is being done incorrectly,
> which is why sws_scale is crashing. I've written about this in
> another mailing list thread too, titled "get RGB values from ffmpeg
> frame". But this question is specific to understanding the
> allocation and filling.
>
> The code I'm talking about from tutorial01:
> pFrameRGB = avcodec_alloc_frame();
> if (pFrameRGB==NULL) return -1;
>
> // Determine required buffer size and allocate buffer
> numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
> pCodecCtx->height);
> buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
>
> sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
> pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
> PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL );
>
> // Assign appropriate parts of buffer to image planes in
> pFrameRGB. Note that pFrameRGB is an AVFrame, but AVFrame is a
> superset of AVPicture
> avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
> pCodecCtx->width, pCodecCtx->height);

You're not showing the crashing code.
--
FFmpeg = Freak and Friendly Mean Perfectionist Elastic Governor

From mybrokenbeat at gmail.com Thu Nov 22 14:35:32 2012
From: mybrokenbeat at gmail.com (Oleg)
Date: Thu, 22 Nov 2012 15:35:32 +0200
Subject: [Libav-user] How can i capture audio from microphone.
In-Reply-To: <216d243b593632ae357956a9781ed78c$3492f55c@mail3.nate.com>
References: <216d243b593632ae357956a9781ed78c$3492f55c@mail3.nate.com>
Message-ID: <A22C7C70-4704-487D-B039-661857D8327F@gmail.com>

I'm not sure, but a plain char* string is treated as ASCII here. Your audio device name contains non-ASCII characters (UTF-8?). That is probably where the problem is.
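
A rough, untested sketch of what I would try (the device string below is
just a placeholder; the real name has to be the UTF-8 bytes of the name
dshow reports, and avdevice_register_all() must have been called first):

#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>

/* Open a DirectShow audio capture device by name. */
static int open_dshow_microphone(AVFormatContext **ctx)
{
    AVInputFormat *dshow = av_find_input_format("dshow");
    /* placeholder name: pass the full device name, UTF-8 encoded,
     * not the ANSI code page string */
    const char *device = "audio=Your Microphone Device Name";
    return avformat_open_input(ctx, device, dshow, NULL);
}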

On 22.11.2012, at 3:46, stonerain at lycos.co.kr wrote:

> Hi everyone.
>
> I'm a beginner in ffmpeg
>
>
> I can capture from webcam in my c code.
>
>
> const char deviceName[] = "video=Sirius USB2.0 Camera";
>
> avformat_open_input(&avFormatContext, deviceName, avInputFomat, NULL);
>
>
> now I want to capture sound from my microphone.
>
>
> const char deviceName[] = "audio=??? ???(2- Microsoft LifeChat L";
>
> avformat_open_input(&avFormatContext, deviceName, avInputFomat, NULL);
>
>
> but this code fails. It returns -5.
>
>
> what is wrong?
>
> is this a unicode problem?
>
> my microphone is ok and connected to my computer.
>
>
> and I also want to know how to capture video and audio at the same time.
>
>
> I'll really appreciate your help.
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121122/c755c6e6/attachment.html>

From svetlana.olonetsky at gmail.com Thu Nov 22 16:00:08 2012
From: svetlana.olonetsky at gmail.com (Svetlana Olonetsky)
Date: Thu, 22 Nov 2012 17:00:08 +0200
Subject: [Libav-user] how to set max bit rate in c++
Message-ID: <CAABYDssLLO4ZDuq-CW6Gk7bc4WvzkCe20c_t1nu9gJe7HMVFKQ@mail.gmail.com>

Hi Everyone,

I am trying to encode to h264 and I am getting very large files.

In MediaInfo I can see that the output file has the following settings:
Bit rate : 13382039
Bit rate : 13.4 Mbps
Nominal bit rate : 2000000
Nominal bit rate : 2 000 Kbps

How can I set the max bit rate in C++?
Which other parameters can significantly influence the output size?

Thank you in advance,
Svetlana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121122/019be539/attachment.html>

From drabner at zoobe.com Thu Nov 22 17:27:56 2012
From: drabner at zoobe.com (Drabner)
Date: Thu, 22 Nov 2012 08:27:56 -0800 (PST)
Subject: [Libav-user] Header missing in mpg,
in spite of using avformat_write_header
Message-ID: <1353601676957-4656101.post@n4.nabble.com>



I am encoding a live rendered video to mpg and/or mp4 (depends on the later
usage of the video) using the ffmpeg C API. When encoding to mp4, everything
works fine. But when encoding to mpg, the resulting video cannot be played by
any player. A quick call to ffprobe on it reveals that the header is
missing. But this seems pretty much impossible, as I am explicitly writing
it.

This is how I write the header, before any frame is encoded:

// ptr->oc is the AVFormatContext
int error = avformat_write_header(ptr->oc, NULL);
if (error < 0)
{
    s_logFile << "Could not write header. Error: " << error << endl;
    fprintf(stderr, "Could not write header. Error: '%i'\n", error);
    return 1;
}

There never is any error when writing the header.

The result mpg does work when I "encode" it in an additional step with an
external ffmpeg executable like this: "ffmpeg -i ownEncoded.mpg -sameq -y
working.mpg".
So it seems all the data is there, only the header is missing for some
reason...

I wonder what could be wrong here as I encode mp4 with the exact same
functions, except setting some special values like qmin, qmax, me_method,
etc. when encoding to mp4. Do I perhaps have to set some special values so
that ffmpeg really does write the header correctly?




--
View this message in context: http://libav-users.943685.n4.nabble.com/Header-missing-in-mpg-in-spite-of-using-avformat-write-header-tp4656101.html
Sent from the libav-users mailing list archive at Nabble.com.

From igorm at stats.com Thu Nov 22 12:30:21 2012
From: igorm at stats.com (Morduhaev, Igor (igorm@stats.com))
Date: Thu, 22 Nov 2012 11:30:21 +0000
Subject: [Libav-user] fps problem when saving video in mp4 container
Message-ID: <F43D3702A40CBB40A1902FC1AD8999093041BD@SEAVER.stats.local>

When I decode frames from an AVI file and then encode them with x264 and save them to an MP4 file, the fps of the output file is always 12,800. Therefore the file plays very fast.

What could be the problem?

Thank you

const char* inFileName = "C:\\000227_C1_GAME.avi";
const char* outFileName = "c:\\test.mp4";
const char* outFileType = "mp4";

av_register_all();

AVFormatContext* inContainer = NULL;
if(avformat_open_input(&inContainer, inFileName, NULL, NULL) < 0)
    exit(1);

if(avformat_find_stream_info(inContainer, NULL) < 0)
    exit(1);

// Find video stream
int videoStreamIndex = -1;
for (unsigned int i = 0; i < inContainer->nb_streams; ++i)
{
    if (inContainer->streams[i] && inContainer->streams[i]->codec &&
        inContainer->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        videoStreamIndex = i;
        break;
    }
}
if (videoStreamIndex == -1) exit(1);

AVFormatContext* outContainer = NULL;
if(avformat_alloc_output_context2(&outContainer, NULL, outFileType, outFileName) < 0)
    exit(1);

// ----------------------------
// Decoder
// ----------------------------
AVStream const *const inStream = inContainer->streams[videoStreamIndex];
AVCodec *const decoder = avcodec_find_decoder(inStream->codec->codec_id);
if(!decoder)
    exit(1);
if(avcodec_open2(inStream->codec, decoder, NULL) < 0)
    exit(1);

// ----------------------------
// Encoder
// ----------------------------
AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
if(!encoder)
    exit(1);
AVStream *outStream = avformat_new_stream(outContainer, encoder);
if(!outStream)
    exit(1);

avcodec_get_context_defaults3(outStream->codec, encoder);

// Construct encoder
if(outContainer->oformat->flags & AVFMT_GLOBALHEADER)
    outStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;

outStream->codec->coder_type = AVMEDIA_TYPE_VIDEO;
outStream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
outStream->codec->width = inStream->codec->width;
outStream->codec->height = inStream->codec->height;
outStream->codec->codec_id = encoder->id;
outStream->codec->bit_rate = 400000;
outStream->codec->time_base.den = 25;
outStream->codec->time_base.num = 1;
outStream->codec->gop_size = 250; // Keyframe interval (= GOP length). Determines the maximum distance between I-frames
outStream->codec->keyint_min = 25; // minimum GOP size
outStream->codec->max_b_frames = 16; // maximum number of B-frames between non-B-frames
outStream->codec->b_frame_strategy = 1; // decides the best number of B-frames to use. Default mode in x264.
outStream->codec->scenechange_threshold = 40;
outStream->codec->refs = 6; // ability to reference frames other than the one immediately prior to the current frame. Specifies how many references can be used.
outStream->codec->qmin = 10;
outStream->codec->qmax = 51;
outStream->codec->i_quant_factor = 0.71;

if(outStream->codec->codec_id == AV_CODEC_ID_H264)
    av_opt_set(outStream->codec->priv_data, "preset", "slow", 0);

// Open encoder
if(avcodec_open2(outStream->codec, encoder, NULL) < 0)
    exit(1);

// Open output container
if(avio_open(&outContainer->pb, outFileName, AVIO_FLAG_WRITE) < 0)
    exit(1);

AVFrame *decodedFrame = avcodec_alloc_frame();
if(!decodedFrame)
    exit(1);
AVFrame *encodeFrame = avcodec_alloc_frame();
if(!encodeFrame)
    exit(1);
encodeFrame->format = outStream->codec->pix_fmt;
encodeFrame->width = outStream->codec->width;
encodeFrame->height = outStream->codec->height;
if(av_image_alloc(encodeFrame->data, encodeFrame->linesize,
                  outStream->codec->width, outStream->codec->height,
                  outStream->codec->pix_fmt, 1) < 0)
    exit(1);

av_dump_format(inContainer, 0, inFileName, 0);

// Write header to output container
avformat_write_header(outContainer, NULL);

AVPacket decodePacket, encodedPacket;
int got_frame, len;
while(av_read_frame(inContainer, &decodePacket) >= 0)
{
    if (decodePacket.stream_index == videoStreamIndex)
    {
        len = avcodec_decode_video2(inStream->codec, decodedFrame, &got_frame, &decodePacket);
        if(len < 0)
            exit(1);
        if(got_frame)
        {
            av_init_packet(&encodedPacket);
            encodedPacket.data = NULL;
            encodedPacket.size = 0;
            if(avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame) < 0)
                exit(1);
            if(got_frame)
            {
                if (outStream->codec->coded_frame->key_frame)
                    encodedPacket.flags |= AV_PKT_FLAG_KEY;

                encodedPacket.stream_index = outStream->index;

                if(av_interleaved_write_frame(outContainer, &encodedPacket) < 0)
                    exit(1);

                av_free_packet(&encodedPacket);
            }
        }
    }

    av_free_packet(&decodePacket);
}
av_write_trailer(outContainer);
avio_close(outContainer->pb);

avcodec_free_frame(&encodeFrame);
avcodec_free_frame(&decodedFrame);

avformat_free_context(outContainer);
av_close_input_file(inContainer);

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121122/dc10f0a4/attachment.html>

From cehoyos at ag.or.at Thu Nov 22 22:59:58 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 22 Nov 2012 21:59:58 +0000 (UTC)
Subject: [Libav-user] fps problem when saving video in mp4 container
References: <F43D3702A40CBB40A1902FC1AD8999093041BD@SEAVER.stats.local>
Message-ID: <loom.20121122T225846-760@post.gmane.org>

Morduhaev, Igor <igorm at ...> writes:

> When I decode frames from avi file and then decode them in
> x264 and save to mp4 file, the fps of the output file is
> always 12,800. Therefore the file is played very fast.
>
> What could be the problem?

Your code does not compile, it is therefore difficult to test...

You could test with ffmpeg (the application) and let the
libraries print the values you set in your code to find
out which one is wrong.

Carl Eugen


From hannes.wuerfel at student.hpi.uni-potsdam.de Thu Nov 22 23:25:16 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Thu, 22 Nov 2012 23:25:16 +0100
Subject: [Libav-user] Troubles with transcoding
In-Reply-To: <CAP9VioeNu3LvN9wxxMGdthiTrGRK5P5ypT+JJj5qWY5UFCbY9g@mail.gmail.com>
References: <CAP9ViofDjmjK2dxNMPu7c-PQbCRwTCx25vW_Ctnp34=xeDdGDw@mail.gmail.com>
<50ACBA27.10202@student.hpi.uni-potsdam.de>
<CAP9VioeNu3LvN9wxxMGdthiTrGRK5P5ypT+JJj5qWY5UFCbY9g@mail.gmail.com>
Message-ID: <50AEA64C.7000206@student.hpi.uni-potsdam.de>

Am 22.11.2012 04:32, schrieb hatred:
> Hi, Hannes!
>
> My comments below. Worked code also.
>
> 2012/11/21 Hannes Wuerfel <hannes.wuerfel at student.hpi.uni-potsdam.de
> <mailto:hannes.wuerfel at student.hpi.uni-potsdam.de>>
>
> Am 21.11.2012 07:42, schrieb hatred:
>
> In my case, I'd like to manipulate video frames of the file with
> some image processing kernels etc. and write the file with the
> same codecs they have back to disc, but of cause with processed
> frames.
> So basically I need a "transcoder" from the codec to the same codec.
>
> What I didn't have is your code fragment:
> "if (outCtx.videoCodecCtx->coded_frame->pts != AV_NOPTS_VALUE)
> {
> outPkt.pts =
> av_rescale_q(outCtx.videoCodecCtx->coded_frame->pts,
> outCtx.videoCodecCtx->time_base,
> outCtx.videoStream->time_base);
> }"
>
>
> In common way Stream time base and CodecContext time base is not
> equal, but regard comments in AVPacket structure pts field must be in
> Stream time base units and I rescale it before muxing
>
>
> If I put it into my code almost all or no video frames are encoded
> succesfully and the output is:
> [libx264 @ 0000000002620080] specified frame type (3) at 1314 is
> not compatible with keyframe interval
> [mp4 @ 00000000034ff080] pts (0) < dts (1352000) in stream 0
> Error while writing video frame
>
>
> It is my error: we should rescale dts field also if it is not equal to
> AV_NOPTS.
>
> I looked into ffmpeg.c as well but I had no time to step threw it
> while transcoding, because I'm currently working on a windows machine.
> Perhaps you can explane to me why we have to rescale the
> presentation time stamp...
>
>
> See comments above. For encoding we can set same time base to stream
> and codec context and remove pts/dts rescaling.
>
> Ah and you've rescaled the audio too. I did not rescale the audio
> but for every input codec I take, audio is always fine, even withouth:
>
> "if (frame->pts == AV_NOPTS_VALUE)
> {
> AVRational samplesRateInv = {1,
> outCtx.audioCodecCtx->sample_rate};
> int64_t pts = inputAudioSamples;
> frame->pts = av_rescale_q(pts, samplesRateInv,
> inCtx.audioCodecCtx->time_base);
> }
> inputAudioSamples += frame->nb_samples;"
>
> Why is this?
>
>
> This part of code is root couse of trouble with sound encoding and
> muxing. After deconding audio samples pts value always equal to
> AV_NOPTS_VALUE, so I tried to calculate real value using sample_rate
> value and decoded samples count. But I suppose that muxer don't want
> valid pts values for audio packets it want NOPTS value for correct muxing.
>
> Worked code: http://pastebin.com/zYTiuRyA
>
> But I also have some amount of questions:
> 1. How to correctly resample audio frames if input and output streams
> have different Sample Format, Sample Rate, Channels Count, Channels
> Layout or Frame Size? Example wanted...
> 2. How to correctly synchronize audio and video? In some cases in
> transcoded file video and sound not synchronized and I don't
> understand - why?
Good job, man. This works perfectly for me. But I discovered the same
when transcoding from some codec to another.
Now there is only one thing different in my code.
As in the demuxing.c example, I'm using something like this after the
transcoding loop to flush the remaining cached frames:
/* flush cached frames */
pkt.data = NULL;
pkt.size = 0;
do {
    decode_packet(&got_frame, 1);
} while (got_frame);

Perhaps this could help with your synchronization problem with cached frames?
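
And for completeness, a rough, untested sketch of the matching encoder-side
flush with the same 2012-era API (the names oc, st and enc are placeholders),
in case cached encoder frames are part of the problem:

#include <libavformat/avformat.h>

/* Keep feeding NULL frames to the delayed encoder until it runs dry,
 * writing each produced packet to the muxer. */
static void flush_video_encoder(AVFormatContext *oc, AVStream *st,
                                AVCodecContext *enc)
{
    int got_packet = 1;
    while (got_packet) {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        if (avcodec_encode_video2(enc, &pkt, NULL, &got_packet) < 0)
            break;
        if (got_packet) {
            pkt.stream_index = st->index;
            av_interleaved_write_frame(oc, &pkt);
            av_free_packet(&pkt);
        }
    }
}
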
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121122/ebad0138/attachment.html>

From hannes.wuerfel at student.hpi.uni-potsdam.de Thu Nov 22 23:35:02 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Thu, 22 Nov 2012 23:35:02 +0100
Subject: [Libav-user] Problems with time.h in ffmpeg util library
Message-ID: <50AEA896.3040302@student.hpi.uni-potsdam.de>

Hey list,

is there a workaround to avoid name conflicts with the standard c time.h
file?
It is a problem when using some other libs in the project where ffmpeg
is used.

From adrozdoff at gmail.com Thu Nov 22 23:52:43 2012
From: adrozdoff at gmail.com (hatred)
Date: Fri, 23 Nov 2012 09:52:43 +1100
Subject: [Libav-user] how to set max bit rate in c++
In-Reply-To: <CAABYDssLLO4ZDuq-CW6Gk7bc4WvzkCe20c_t1nu9gJe7HMVFKQ@mail.gmail.com>
References: <CAABYDssLLO4ZDuq-CW6Gk7bc4WvzkCe20c_t1nu9gJe7HMVFKQ@mail.gmail.com>
Message-ID: <CAP9VioewUeUpwD9s-T=FfbcgCkVcx3QpAf822k2YorY2V3maRg@mail.gmail.com>

Hi Svetlana,

Try these params:
AVCodecContext::rc_min_rate - minimum bitrate
AVCodecContext::rc_max_rate - maximum bitrate
AVCodecContext::bit_rate - the average bitrate

And see preset files for h264 encoding.
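
For example, a minimal untested sketch (the numbers are arbitrary;
rc_buffer_size is my addition - it is the VBV buffer that makes the
maximum rate effective):

#include <libavcodec/avcodec.h>

static void set_bitrate_limits(AVCodecContext *c)
{
    c->bit_rate       = 2000000;  /* target average: 2 Mb/s   */
    c->rc_max_rate    = 2500000;  /* hard ceiling             */
    c->rc_min_rate    = 0;        /* no lower bound           */
    c->rc_buffer_size = 4000000;  /* VBV buffer size, in bits */
}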

2012/11/23 Svetlana Olonetsky <svetlana.olonetsky at gmail.com>

> Hi Everyone,
>
> I am trying to encode to h264 and getting very big files.
>
> I media info I can see that output file has following settings:
> Bit rate : 13382039
> Bit rate : 13.4 Mbps
> Nominal bit rate : 2000000
> Nominal bit rate : 2 000 Kbps
>
> How can I set max bit rate in c++ ?
> Which other parameters can influence output size by factor ?
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/90025efc/attachment.html>

From cehoyos at ag.or.at Thu Nov 22 23:52:47 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 22 Nov 2012 22:52:47 +0000 (UTC)
Subject: [Libav-user] Problems with time.h in ffmpeg util library
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
Message-ID: <loom.20121122T235158-286@post.gmane.org>

Hannes Wuerfel <hannes.wuerfel at ...> writes:

> is there a workaround to avoid name conflicts with
> the standard c time.h file?

> It is a problem when using some other libs in
> the project where ffmpeg is used.

Could you elaborate?
(So far, nobody could explain how this can be a
problem.)

Carl Eugen


From hannes.wuerfel at student.hpi.uni-potsdam.de Fri Nov 23 00:25:49 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Fri, 23 Nov 2012 00:25:49 +0100
Subject: [Libav-user] Problems with time.h in ffmpeg util library
In-Reply-To: <loom.20121122T235158-286@post.gmane.org>
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
Message-ID: <50AEB47D.3010701@student.hpi.uni-potsdam.de>

Am 22.11.2012 23:52, schrieb Carl Eugen Hoyos:
> Hannes Wuerfel <hannes.wuerfel at ...> writes:
>
>> is there a workaround to avoid name conflicts with
>> the standard c time.h file?
>> It is a problem when using some other libs in
>> the project where ffmpeg is used.
> Could you elaborate?
> (So far, nobody could explain how this can be a
> problem.)
>
> Carl Eugen
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
For example:

// ffmpeg
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/audioconvert.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
}

// opencv
#include <cv.h>
#include <highgui.h>
#include <cvaux.h>
#include <cxcore.h>

causing:
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(18): error C2039: 'clock_t': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(18): error C2873: 'clock_t': Das Symbol kann nicht
in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(20): error C2039: 'asctime': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(20): error C2873: 'asctime': Das Symbol kann nicht
in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(20): error C2039: 'clock': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(20): error C2873: 'clock': Das Symbol kann nicht
in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(20): error C2039: 'ctime': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(20): error C2873: 'ctime': Das Symbol kann nicht
in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(21): error C2039: 'difftime': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(21): error C2873: 'difftime': Das Symbol kann
nicht in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(21): error C2039: 'gmtime': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(21): error C2873: 'gmtime': Das Symbol kann nicht
in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(21): error C2039: 'localtime': Ist kein Element
von '`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(21): error C2873: 'localtime': Das Symbol kann
nicht in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(22): error C2039: 'mktime': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(22): error C2873: 'mktime': Das Symbol kann nicht
in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(22): error C2039: 'strftime': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(22): error C2873: 'strftime': Das Symbol kann
nicht in einer using-Deklaration verwendet werden
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(22): error C2039: 'time': Ist kein Element von
'`global namespace''
2>C:\Program Files (x86)\Microsoft Visual Studio
10.0\VC\include\ctime(22): error C2873: 'time': Das Symbol kann nicht in
einer using-Deklaration verwendet werden
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(47): error
C2146: Syntaxfehler: Fehlendes ';' vor Bezeichner 'startTime'
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(47): error
C4430: Fehlender Typspezifizierer - int wird angenommen. Hinweis:
"default-int" wird von C++ nicht unterst?tzt.
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(47): error
C4430: Fehlender Typspezifizierer - int wird angenommen. Hinweis:
"default-int" wird von C++ nicht unterst?tzt.
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(69): error
C2065: 'startTime': nichtdeklarierter Bezeichner
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(69): error
C3861: "clock": Bezeichner wurde nicht gefunden.
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(77): error
C2065: 'clock_t': nichtdeklarierter Bezeichner
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(77): error
C2146: Syntaxfehler: Fehlendes ';' vor Bezeichner 'stopTime'
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(77): error
C2065: 'stopTime': nichtdeklarierter Bezeichner
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(77): error
C3861: "clock": Bezeichner wurde nicht gefunden.
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(78): error
C2065: 'stopTime': nichtdeklarierter Bezeichner
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(78): error
C2065: 'startTime': nichtdeklarierter Bezeichner
2>d:\lib\opencv_2.4\build\include\opencv2\flann\timer.h(78): error
C2065: 'CLOCKS_PER_SEC': nichtdeklarierter Bezeichner

opencv2's timer.h includes <time.h>

Strange:
When I just code:

// ffmpeg
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/audioconvert.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
}
#include <time.h>

it works fine, but using <ctime> instead causes the same VC errors as
above.


From cehoyos at ag.or.at Fri Nov 23 00:42:16 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 22 Nov 2012 23:42:16 +0000 (UTC)
Subject: [Libav-user] Problems with time.h in ffmpeg util library
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
<50AEB47D.3010701@student.hpi.uni-potsdam.de>
Message-ID: <loom.20121123T003549-726@post.gmane.org>

Hannes Wuerfel <hannes.wuerfel at ...> writes:

> extern "C" {
> #include <libavcodec/avcodec.h>
> #include <libavformat/avformat.h>
> #include <libswscale/swscale.h>
> #include <libavutil/opt.h>
> #include <libavutil/audioconvert.h>
> #include <libavutil/common.h>
> #include <libavutil/imgutils.h>
> #include <libavutil/mathematics.h>
> #include <libavutil/samplefmt.h>
> }
>
> // opencv
> #include <cv.h>
> #include <highgui.h>
> #include <cvaux.h>
> #include <cxcore.h>
>
> causing:
> 2>C:\Program Files (x86)\Microsoft Visual Studio
> 10.0\VC\include\ctime(18): error C2039: 'clock_t':
> Ist kein Element von '`global namespace''

(Unrelated: Am I the only one who feels that above
line is simply insufficient for any useful debugging?)

Sorry if I am missing something, but iirc there is no
source file in FFmpeg called "ctime"...

[...]

> opencv2's timer.h includes <time.h>

Which can never include libavutil/time.h if your
include path is correct: It must not contain
any of the library directories.
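
Just to illustrate (a sketch, assuming the include path points only at the
install's top-level include directory, e.g. -Ic:/ffmpeg/include and not at
.../include/libavutil), the two headers can then coexist:

#include <time.h>            /* the ANSI C header                       */
#include <libavutil/time.h>  /* FFmpeg's header, reached via libavutil/ */

int main(void)
{
    time_t  now = time(NULL);    /* ANSI C time()    */
    int64_t us  = av_gettime();  /* libavutil/time.h */
    return (now > 0 && us > 0) ? 0 : 1;
}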

Please cut your quotes, and please do not remove
empty lines from your quotes; they (always)
ease reading.

Carl Eugen


From adrozdoff at gmail.com Fri Nov 23 01:38:27 2012
From: adrozdoff at gmail.com (hatred)
Date: Fri, 23 Nov 2012 11:38:27 +1100
Subject: [Libav-user] Troubles with transcoding
In-Reply-To: <50AEA64C.7000206@student.hpi.uni-potsdam.de>
References: <CAP9ViofDjmjK2dxNMPu7c-PQbCRwTCx25vW_Ctnp34=xeDdGDw@mail.gmail.com>
<50ACBA27.10202@student.hpi.uni-potsdam.de>
<CAP9VioeNu3LvN9wxxMGdthiTrGRK5P5ypT+JJj5qWY5UFCbY9g@mail.gmail.com>
<50AEA64C.7000206@student.hpi.uni-potsdam.de>
Message-ID: <CAP9Viofj9MrGKdPhPRnTWBdo1bjSg=oX1umMeK3nfeQcoYC+cA@mail.gmail.com>

2012/11/23 Hannes Wuerfel <hannes.wuerfel at student.hpi.uni-potsdam.de>

> Good job man. This works for perfectly for me.
>

Thanks :-)


> But I discovered the same when transcoding from some codec to another.
> Now there is only one thing different in my code.
> As in the demuxing.c example I'm using something like this after the
> trancoding loop to flush the remaining cached frames:
> /* flush cached frames */
> pkt.data = NULL;
> pkt.size = 0;
> do {
> decode_packet(&got_frame, 1);
> } while (got_frame);
>
> Perhaps this could help for your synchronization problem of cached frames?
>

It is not relevant for me: I am writing a live transcoding server for our
IP cams and I don't know when EOF will occur, so I never do the packet
flush and trailer write.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/b744efa5/attachment.html>

From adrozdoff at gmail.com Fri Nov 23 01:48:26 2012
From: adrozdoff at gmail.com (hatred)
Date: Fri, 23 Nov 2012 11:48:26 +1100
Subject: [Libav-user] Problems with time.h in ffmpeg util library
In-Reply-To: <50AEB47D.3010701@student.hpi.uni-potsdam.de>
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
<50AEB47D.3010701@student.hpi.uni-potsdam.de>
Message-ID: <CAP9ViocpDfO_fNnFiZ-7WGuvae5XFwzzXZXquc7fFyMYAht08A@mail.gmail.com>

Hi Hannes,


Seems a bit off-topic, but why don't you use the C++ interface of OpenCV?

like:
#include <cv.hpp>
instead
#include <cv.h>

more info: http://docs.opencv.org/modules/refman.html and
http://docs.opencv.org/modules/core/doc/intro.html#api-concepts
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/cba7f2d9/attachment.html>

From hannes.wuerfel at student.hpi.uni-potsdam.de Fri Nov 23 03:09:53 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Fri, 23 Nov 2012 03:09:53 +0100
Subject: [Libav-user] Problems with time.h in ffmpeg util library
In-Reply-To: <loom.20121123T003549-726@post.gmane.org>
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
<50AEB47D.3010701@student.hpi.uni-potsdam.de>
<loom.20121123T003549-726@post.gmane.org>
Message-ID: <50AEDAF1.3090600@student.hpi.uni-potsdam.de>

Ah, perhaps it was my fault to give you the compile errors in German.
I don't think you've understood me right.
My problem is simple. I'd like to use ffmpeg in a project with some more
lib dependencies, for example opencv, cuda libs, qt, open scenegraph or
whatever.
I've also written a simple start/stop timer class for time measurement
that uses the C++ include for time.h, which is <ctime>.

Combining them all together causes name conflicts. The same happened to
me in the early days of SDL, which also (in my opinion) made the mistake
of naming a file time.h.
Well, for my simple start/stop timer it's fine, I could write platform
dependent code, but using time.h is common I think.
I was wondering if anybody has stumbled upon this issue and can provide
me with a workaround, or can guide me in the right direction if I'm doing
something wrong.

My situation now is that I'm unable to use libavutil with opencv, osg or
my timer class.
The only thing I found on the web was an osg contributor who discovered
the same problem:

http://forum.openscenegraph.org/viewtopic.php?t=11190
"FindFFmpeg.cmake has been changed in order to support the new header
include format for FFmpeg. In the 1.0 release, a new file had been added
with the name 'time.h' in the avutil library. The previous method of
adding includes caused conflicts with the ANSI C 'time.h' file. Now the
include directive will only use the main include folder. All files using
the old include format have been updated to reflect the change."

Many thanks in advance
Hannes

From hannes.wuerfel at student.hpi.uni-potsdam.de Fri Nov 23 03:19:00 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Fri, 23 Nov 2012 03:19:00 +0100
Subject: [Libav-user] Problems with time.h in ffmpeg util library
In-Reply-To: <CAP9ViocpDfO_fNnFiZ-7WGuvae5XFwzzXZXquc7fFyMYAht08A@mail.gmail.com>
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
<50AEB47D.3010701@student.hpi.uni-potsdam.de>
<CAP9ViocpDfO_fNnFiZ-7WGuvae5XFwzzXZXquc7fFyMYAht08A@mail.gmail.com>
Message-ID: <50AEDD14.3000502@student.hpi.uni-potsdam.de>

> Seems like offtopic, but why do you not use C++ interface of opencv?
>
> like:
> #include <cv.hpp>
> instead
> #include <cv.h>
Hi,
not really off-topic; I was actually playing around with it and wanted to
"port" it to the C++ interface when I find time for it.
But nevertheless, it does not change anything related to the name conflict.

Thx.

From nicolas.george at normalesup.org Fri Nov 23 15:28:03 2012
From: nicolas.george at normalesup.org (Nicolas George)
Date: Fri, 23 Nov 2012 15:28:03 +0100
Subject: [Libav-user] Problems with time.h in ffmpeg util library
In-Reply-To: <50AEB47D.3010701@student.hpi.uni-potsdam.de>
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
<50AEB47D.3010701@student.hpi.uni-potsdam.de>
Message-ID: <20121123142803.GA28606@phare.normalesup.org>

Le tridi 3 frimaire, an CCXXI, Hannes Wuerfel a écrit :
> causing:
> 2>C:\Program Files (x86)\Microsoft Visual Studio

You should show the compilation options too, otherwise there is no way of
telling what is wrong.

Regards,

--
Nicolas George

From cehoyos at ag.or.at Fri Nov 23 16:09:45 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Fri, 23 Nov 2012 15:09:45 +0000 (UTC)
Subject: [Libav-user] Problems with time.h in ffmpeg util library
References: <50AEA896.3040302@student.hpi.uni-potsdam.de>
<loom.20121122T235158-286@post.gmane.org>
<50AEB47D.3010701@student.hpi.uni-potsdam.de>
<loom.20121123T003549-726@post.gmane.org>
<50AEDAF1.3090600@student.hpi.uni-potsdam.de>
Message-ID: <loom.20121123T160212-97@post.gmane.org>

Hannes Wuerfel <hannes.wuerfel at ...> writes:

> Ah, perhaps my fault giving you compile errors in german.

Given that German is my native language, this is hopefully not
the problem;-)
(If you mean my rant about the MSVC error message: it is missing
the reason - the source code line - why a specific header was
included; this would imo not only make this issue easier to
understand.)

Allow me to try again:
You report that (current) FFmpeg cannot be compiled in projects
that also include certain other libraries.
This would be a very important (critical) issue that has to be
fixed.
But the first step is that FFmpeg developers must be able to
reproduce the problem. That probably means that you should
provide a minimal source file and either a Makefile or -
preferably - a compilation command that allows us to reproduce
the compilation problem.
(The solution is of course simple, but since it includes a major
version bump it is extremely unlikely to happen without a
reproducible report.)

Please note that the problem has been reported before; so far the
issue was always that incorrect header inclusion flags were given to
the compiler.

Carl Eugen


From drabner at zoobe.com Fri Nov 23 16:35:51 2012
From: drabner at zoobe.com (Jan drabner)
Date: Fri, 23 Nov 2012 16:35:51 +0100
Subject: [Libav-user] av_write_frame fails when encoding a larger audio file
to .mpg
Message-ID: <CANaDcjMTAROuFzpXi7+07Qd9SVw4tr8q=vo+QN0wZmBCtD+kzQ@mail.gmail.com>

Hey,

I am encoding live rendered video data and an existing .wav file into an
mpg-file.

To do that I first write all audio frames, and then the video frames as
they come in from the render engine. For smaller .wav files (< 25 seconds),
everything works perfectly fine. But as soon as I use a longer .wav file,
av_write_frame (when writing the audio frame) just returns -1 after having
written some 100 frames. It is never the same frame at which it fails, also
it is never the last frame. All test .wav files can be played perfectly
with any player I tested.

I am following the muxing example
<http://ffmpeg.org/doxygen/trunk/muxing_8c-source.html> (more or less).

Here is my function that writes an audio frame:

void write_audio_frame( Cffmpeg_dll * ptr, AVFormatContext *oc,
                        AVStream *st, int16_t sample_val )
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(ptr, ptr->samples, ptr->audio_input_frame_size, c->channels);
    frame->nb_samples = ptr->audio_input_frame_size;
    int result = avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                                          (uint8_t *) ptr->samples,
                                          ptr->audio_input_frame_size *
                                          av_get_bytes_per_sample(c->sample_fmt) *
                                          c->channels, 0);
    if (result != 0)
    {
        av_log(c, AV_LOG_ERROR, "Error filling audio frame. Code: %i\n", result);
        exit(1);
    }

    result = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (result != 0)
    {
        av_log(c, AV_LOG_ERROR, "Error encoding audio. Code: %i\n", result);
        exit(1);
    }

    if (c->coded_frame && c->coded_frame->pts != AV_NOPTS_VALUE)
        pkt.pts = av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
    pkt.flags |= AV_PKT_FLAG_KEY;

    pkt.stream_index = st->index;
    av_log(c, AV_LOG_ERROR, "Got? %i Pts: %i Dts: %i Flags: %i Side Elems: %i Size: %i\n",
           got_packet, pkt.pts, pkt.dts, pkt.flags, pkt.side_data_elems, pkt.size);

    /* write the compressed frame in the media file */
    result = av_write_frame(oc, &pkt);
    if (result != 0)
    {
        av_log(c, AV_LOG_ERROR, "Error while writing audio frame. Result: %i\n", result);
        exit(1);
    }
}

And here is my get_audio_frame function:

void get_audio_frame( Cffmpeg_dll* ptr, int16_t* samples, int frame_size,
                      int nb_channels, int16_t sample_val )
{
    fread( samples, sizeof( int16_t ), frame_size * nb_channels,
           ptr->fp_sound_input );
};

And finally, this is the loop in which I write all audio frames (don't
worry about the .wav header, I skipped it before that loop):

while (!feof(ptr->fp_sound_input))
{
    write_audio_frame( ptr, ptr->oc, ptr->audio_st, -1 );
}

As you can see, I'm outputting almost everything in the packet and check
for any possible error. Other than av_write_frame failing after some time
when I am encoding a longer audio file, everything seems perfectly fine.
All the packet values I am tracking are 100% the same for all frames
(except the data pointer, obviously). Also, as stated, the same procedure
works flawlessly for shorter fp_sound_input files. avcodec_encode_audio2()
and avcodec_fill_audio_frame() also never fail.

The codecs I use for encoding are CODEC_ID_MPEG2VIDEO (video) and
CODEC_ID_MP2 (audio). The .wav files are saved in PCM 16 LE (all use the
exact same encoding).

What could be wrong here?


--
Jan Drabner · TD Programming Engine & Animation



zoobe message entertainment gmbh

kurfürstendamm 226 l 10719 berlin



email: drabner at zoobe.com <nachname at zoobe.com> l mob: 0172 7017640



geschäftsführer: lenard f. krawinkel

tel: +49 30. 288 838 88 l site: *www.zoobe.com* <http://www.zoobe.com/> ·
email: *info at zoobe.com* <info at zoobe.com>



amtsgericht charlottenburg, berlin · hrb-nr. 11 42 79 b
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/540d9083/attachment.html>

From atlithorn at gmail.com Fri Nov 23 19:21:32 2012
From: atlithorn at gmail.com (Atli Thorbjornsson)
Date: Fri, 23 Nov 2012 18:21:32 +0000
Subject: [Libav-user] Simple demux and copy codec
Message-ID: <CAAobVz+2L+Caxs2yuVcGf9swJUkJzNV5sEHue66oCb57GtrUXg@mail.gmail.com>

I am trying to use ffmpeg in Android as a demuxer for FLV contained audio.

As a proof of concept I wrote a barebones demuxer->muxer in C that
essentially tries to mimic this ffmpeg command

ffmpeg -i input.flv -acodec copy output.aac

The output of ffmpeg obviously works perfectly. But my program's output
differs by a few bytes, I think it's the ADTS header or something.
My program can also do FLV contained MP3 and even though my program's
output still differs by a few bytes, my output is playable.

Only too happy to share the entire source if someone has the time to wade
through it but the basic functionality is like so:

init a whole lot of stuff
...
avformat_write_header(outputFormatCtx,NULL);
while(av_read_frame(inputFormatCtx, &inputPacket))
if(inputPacket.stream_index == audioStreamIndex)
av_interleaved_write_frame(outputFormatCtx, inputPacket)
av_write_trailer(outputFormatCtx);


I've been wading through ffmpeg.c to try and figure out what I'm missing
but no luck as of yet.
I'm guessing I should add some options to avformat_write_header instead of
NULL but I haven't been able to figure out what.

Many thanks,
Atli.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/798ce367/attachment.html>

From atlithorn at gmail.com Fri Nov 23 23:09:05 2012
From: atlithorn at gmail.com (Atli Thorbjornsson)
Date: Fri, 23 Nov 2012 22:09:05 +0000
Subject: [Libav-user] Simple demux and copy codec
In-Reply-To: <CAAobVz+2L+Caxs2yuVcGf9swJUkJzNV5sEHue66oCb57GtrUXg@mail.gmail.com>
References: <CAAobVz+2L+Caxs2yuVcGf9swJUkJzNV5sEHue66oCb57GtrUXg@mail.gmail.com>
Message-ID: <CAAobVzLQo6YraFrs1dq2w8O4N5zMG+s+TNNZuO+5EEXfTVHz8g@mail.gmail.com>

I was right, I was missing options, so I added the following to the init phase:

opts = setup_find_stream_info_opts(iformatCtx, codec_opts);
err = avformat_find_stream_info(iformatCtx,opts);

instead of just

err = avformat_find_stream_info(iformatCtx,NULL)

And then

avformat_write_header(outputFormatCtx,opts)

That fixed my AAC output, it's now identical to ffmpeg's but it broke
MP3... I get the following error

[mp3 @ 0x7f9c73060c00] Invalid audio stream. Exactly one MP3 audio stream
is required.
failed to write header: Invalid argument

Almost there ;)


On 23 November 2012 18:21, Atli Thorbjornsson <atlithorn at gmail.com> wrote:

> I am trying to use ffmpeg in Android as a demuxer for FLV contained audio.
>
> As a proof of concept I wrote a barebones demuxer->muxer in C that
> essentially tries to mimic this ffmpeg command
>
> ffmpeg -i input.flv -acodec copy output.aac
>
> The output of ffmpeg obviously works perfectly. But my program's output
> differs by a few bytes, I think it's the ADTS header or something.
> My program can also do FLV contained MP3 and even though my program's
> output still differs by a few bytes, my output is playable.
>
> Only too happy to share the entire source if someone has the time to wade
> through it but the basic functionality is like so:
>
> init a whole lot of stuff
> ...
> avformat_write_header(outputFormatCtx,NULL);
> while(av_read_frame(inputFormatCtx, &inputPacket))
> if(inputPacket.stream_index == audioStreamIndex)
> av_interleaved_write_frame(outputFormatCtx, inputPacket)
> av_write_trailer(outputFormatCtx);
>
>
> I've been wading through ffmpeg.c to try and figure out what I'm missing
> but no luck as of yet.
> I'm guessing I should add some options to avformat_write_header instead of
> NULL but I haven't been able to figure out what.
>
> Many thanks,
> Atli.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/d9d2fba3/attachment.html>

From atlithorn at gmail.com Sat Nov 24 00:54:27 2012
From: atlithorn at gmail.com (Atli Thorbjornsson)
Date: Fri, 23 Nov 2012 23:54:27 +0000
Subject: [Libav-user] Simple demux and copy codec
In-Reply-To: <CAAobVzLQo6YraFrs1dq2w8O4N5zMG+s+TNNZuO+5EEXfTVHz8g@mail.gmail.com>
References: <CAAobVz+2L+Caxs2yuVcGf9swJUkJzNV5sEHue66oCb57GtrUXg@mail.gmail.com>
<CAAobVzLQo6YraFrs1dq2w8O4N5zMG+s+TNNZuO+5EEXfTVHz8g@mail.gmail.com>
Message-ID: <CAAobVz+oETWqv15mHYqkRCjcFfugkm98vWmvd+TAevXRkRrLiw@mail.gmail.com>

I'll just keep answering myself: it was all because of bad init code.
Thanks for being the teddy bear ;)


On 23 November 2012 22:09, Atli Thorbjornsson <atlithorn at gmail.com> wrote:

> I was right, I was missing options so I add the following to the init phase
>
> opts = setup_find_stream_info_opts(iformatCtx, codec_opts);
> err = avformat_find_stream_info(iformatCtx,opts);
>
> instead of just
>
> err = avformat_find_stream_info(iformatCtx,NULL)
>
> And then
>
> avformat_write_header(outputFormatCtx,opts)
>
> That fixed my AAC output, it's now identical to ffmpeg's but it broke
> MP3... I get the following error
>
> [mp3 @ 0x7f9c73060c00] Invalid audio stream. Exactly one MP3 audio stream
> is required.
> failed to write header: Invalid argument
>
> Almost there ;)
>
>
> On 23 November 2012 18:21, Atli Thorbjornsson <atlithorn at gmail.com> wrote:
>
>> I am trying to use ffmpeg in Android as a demuxer for FLV contained audio.
>>
>> As a proof of concept I wrote a barebones demuxer->muxer in C that
>> essentially tries to mimic this ffmpeg command
>>
>> ffmpeg -i input.flv -acodec copy output.aac
>>
>> The output of ffmpeg obviously works perfectly. But my program's output
>> differs by a few bytes, I think it's the ADTS header or something.
>> My program can also do FLV contained MP3 and even though my program's
>> output still differs by a few bytes, my output is playable.
>>
>> Only too happy to share the entire source if someone has the time to wade
>> through it but the basic functionality is like so:
>>
>> init a whole lot of stuff
>> ...
>> avformat_write_header(outputFormatCtx,NULL);
>> while(av_read_frame(inputFormatCtx, &inputPacket))
>> if(inputPacket.stream_index == audioStreamIndex)
>> av_interleaved_write_frame(outputFormatCtx, inputPacket)
>> av_write_trailer(outputFormatCtx);
>>
>>
>> I've been wading through ffmpeg.c to try and figure out what I'm missing
>> but no luck as of yet.
>> I'm guessing I should add some options to avformat_write_header instead
>> of NULL but I haven't been able to figure out what.
>>
>> Many thanks,
>> Atli.
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121123/2442a498/attachment.html>

From lixiaohui9219 at 126.com Sat Nov 24 17:43:37 2012
From: lixiaohui9219 at 126.com (=?GBK?B?wOjQobvU?=)
Date: Sun, 25 Nov 2012 00:43:37 +0800 (CST)
Subject: [Libav-user] rtmp pause and seek based on librtmp failed!!!
Message-ID: <319909bd.23abc.13b334d89da.Coremail.lixiaohui9219@126.com>

Hi,
My multimedia player is based on ffmpeg. To implement RTMP pause and seek, I added librtmp support to FFmpeg. But RTMP pause and resume cause the program to crash with error 104 when writing to the socket (no pause notification from the server). And after a seek (which does get a seek notification), the video freezes and playback exits after a while. Please help me if you know the reason for this. Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/4c6cd15d/attachment.html>

From sender at ImageMining.net Sun Nov 25 02:47:04 2012
From: sender at ImageMining.net (Jim Morgenstern)
Date: Sat, 24 Nov 2012 20:47:04 -0500
Subject: [Libav-user] installing for Windows
Message-ID: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>

A stupid, naïve question that has probably been answered before but I would
appreciate a reply:



I downloaded FFmpeg git-8b6aeb1 64-bit Dev (Latest) from Zeranoe:
<http://ffmpeg.zeranoe.com/builds/win64/dev/ffmpeg-20121120-git-8b6aeb1-win64-dev.7z>



I want to use FFmpeg on Win 7. Is there an install process ?



should I rename the DLLs? [e.g. I have: libavcodec.dll.a]

have I downloaded the wrong file ??



thanks







Jim Morgenstern

Image Mining LLC

248-252-2626



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121124/c876c053/attachment.html>

From moiein2000 at gmail.com Sun Nov 25 10:08:40 2012
From: moiein2000 at gmail.com (Matthew Einhorn)
Date: Sun, 25 Nov 2012 04:08:40 -0500
Subject: [Libav-user] installing for Windows
In-Reply-To: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
Message-ID: <CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>

On Sat, Nov 24, 2012 at 8:47 PM, Jim Morgenstern <sender at imagemining.net>wrote:

> A stupid, naïve question that has probably been answered before but I
> would appreciate a reply:
>
> I downloaded FFmpeg git-8b6aeb1 64-bit Dev (Latest) from Zeranoe
>
> I want to use FFmpeg on Win 7. Is there an install process ?
>
> should I rename the dll s ?? [e.g. I have: libavcodec.dll.a ]
>
> have I downloaded the wrong file ??
>

From your question I assume you probably want the static build instead:
http://ffmpeg.zeranoe.com/builds/win64/static/ffmpeg-20121120-git-8b6aeb1-win64-static.7z

In there you'll find among others an ffmpeg.exe file, which you use from
the command line. There's no installation involved.

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/bcb48252/attachment.html>

From rjvbertin at gmail.com Sun Nov 25 10:15:36 2012
From: rjvbertin at gmail.com (RJV Bertin)
Date: Sun, 25 Nov 2012 10:15:36 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
Message-ID: <35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>

Related question: is there a build available that provides the libraries in the format required for use with MSVC?

From moiein2000 at gmail.com Sun Nov 25 10:38:37 2012
From: moiein2000 at gmail.com (Matthew Einhorn)
Date: Sun, 25 Nov 2012 04:38:37 -0500
Subject: [Libav-user] installing for Windows
In-Reply-To: <35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
Message-ID: <CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>

On Sun, Nov 25, 2012 at 4:15 AM, RJV Bertin <rjvbertin at gmail.com> wrote:

> Related question: is there a build available that provides the libraries
> in the format required for use with MSVC?
>

Ahh, if you want to use the libraries from MSVC you need the dev (which you
linked in your original email) and shared (
http://ffmpeg.zeranoe.com/builds/win64/shared/ffmpeg-20121120-git-8b6aeb1-win64-shared.7z)
builds.

The shared build provides the dlls that you need to load. The dev build
provides the include header files as well as stub .lib files that you can
use to implicitly load the dll files. If you're using visual studio, you
add the .lib files to the configuration and that will make all the
functions in the dlls available to you.
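
If it helps, a small untested sketch of that setup from the source side
(assuming the dev package's stub libraries are named avcodec.lib,
avformat.lib and avutil.lib and the include/lib directories are already
configured in the project):

#include <stdio.h>
#include <libavformat/avformat.h>

#pragma comment(lib, "avcodec.lib")   /* MSVC: pull in the stub .lib files, */
#pragma comment(lib, "avformat.lib")  /* the matching dlls from the shared  */
#pragma comment(lib, "avutil.lib")    /* build are loaded at run time       */

int main(void)
{
    av_register_all();
    printf("%s\n", avformat_configuration()); /* proves the dlls were found */
    return 0;
}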

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/d0460a8a/attachment.html>

From rjvbertin at gmail.com Sun Nov 25 10:42:30 2012
From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=)
Date: Sun, 25 Nov 2012 10:42:30 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
Message-ID: <4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>

Wasn't my original email but thanks :)

There are no static libraries to link with from MSVC?

René

On Nov 25, 2012, at 10:38, Matthew Einhorn wrote:

> On Sun, Nov 25, 2012 at 4:15 AM, RJV Bertin <rjvbertin at gmail.com> wrote:
> Related question: is there a build available that provides the libraries in the format required for use with MSVC?
>
> Ahh, if you want to use the libraries from MSVC you need the dev (which you linked in your original email) and shared (http://ffmpeg.zeranoe.com/builds/win64/shared/ffmpeg-20121120-git-8b6aeb1-win64-shared.7z) builds.
>
> The shared build provides the dlls that you need to load. The dev build provides the include header files as well as stub .lib files that you can use to implicitly load the dll files. If you're using visual studio, you add the .lib files to the configuration and that will make all the functions in the dlls available to you.
>
> Matt
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user


From theateist at gmail.com Sun Nov 25 16:55:25 2012
From: theateist at gmail.com (theateist84)
Date: Sun, 25 Nov 2012 07:55:25 -0800 (PST)
Subject: [Libav-user] Missing codec parameters and FLAGS
Message-ID: <1353858925599-4656126.post@n4.nabble.com>

I develop in VS2010 using ffmpeg.
I tried a code example from here
<http://www.mail-archive.com/libav-user at mplayerhq.hu/msg05858.html>, but
it seems like VS cannot find some parameters, for example the flags
X264_PART_I8X8, X264_PART_I4X4, X264_PART_P8X8 and X264_PART_B8X8,
AVCodecContext.crf, etc. Where are they located?



--
View this message in context: http://libav-users.943685.n4.nabble.com/Missing-codec-parameters-and-FLAGS-tp4656126.html
Sent from the libav-users mailing list archive at Nabble.com.

From theateist at gmail.com Sun Nov 25 17:11:29 2012
From: theateist at gmail.com (theateist84)
Date: Sun, 25 Nov 2012 08:11:29 -0800 (PST)
Subject: [Libav-user] Missing codec parameters and FLAGS
Message-ID: <1353859889888-4656127.post@n4.nabble.com>

I'm developing using VS2010 with ffmpeg and tried the code from here. But
VS says that it cannot find:

1 - CODEC_FLAG2_BPYRAMID+CODEC_FLAG2_MIXED_REFS+CODEC_FLAG2_WPRED+CODEC_FLAG2_8X8DCT+CODEC_FLAG2_FASTPSKIP;
    // flags2=+bpyramid+mixed_refs+wpred+dct8x8+fastpskip
2 - X264_PART_I8X8+X264_PART_I4X4+X264_PART_P8X8+X264_PART_B8X8
3 - avCodecContext.crf

Where are they located?



--
View this message in context: http://libav-users.943685.n4.nabble.com/Missing-codec-parameters-and-FLAGS-tp4656127.html
Sent from the libav-users mailing list archive at Nabble.com.

From moiein2000 at gmail.com Sun Nov 25 18:12:56 2012
From: moiein2000 at gmail.com (Matthew Einhorn)
Date: Sun, 25 Nov 2012 12:12:56 -0500
Subject: [Libav-user] installing for Windows
In-Reply-To: <4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
Message-ID: <CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>

On Sun, Nov 25, 2012 at 4:42 AM, "Ren? J.V. Bertin" <rjvbertin at gmail.com>wrote:

> Wasn't my original email but thanks :)
>
> There are no static libraries to link with from MSVC?
>

There aren't any in the zeranoe builds. And I don't know of any other site
that provides a static library for ffmpeg.

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/25121ae2/attachment.html>

From alexcohn at netvision.net.il Sun Nov 25 18:46:55 2012
From: alexcohn at netvision.net.il (Alex Cohn)
Date: Sun, 25 Nov 2012 19:46:55 +0200
Subject: [Libav-user] installing for Windows
In-Reply-To: <CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
Message-ID: <CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>

On 25 Nov 2012 19:13, "Matthew Einhorn" <moiein2000 at gmail.com> wrote:
>
> On Sun, Nov 25, 2012 at 4:42 AM, "René J.V. Bertin" <rjvbertin at gmail.com>
wrote:
>>
>> Wasn't my original email but thanks :)
>>
>> There are no static libraries to link with from MSVC?
>
> There aren't any in the zeranoe builds. And I don't know of any other
site that provides a static library for ffmpeg.

And there is a good reason for that: static linkage makes LGPL compliance
much harder.

Alex

> Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/2759b058/attachment.html>

From rjvbertin at gmail.com Sun Nov 25 18:50:02 2012
From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=)
Date: Sun, 25 Nov 2012 18:50:02 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
Message-ID: <3BBC9B96-F5EE-46CB-A910-874CADB7EA44@gmail.com>

Why is that?

On Nov 25, 2012, at 18:46, Alex Cohn wrote:

> > There aren't any in the zeranoe builds. And I don't know of any other site that provides a static library for ffmpeg.
>
> And there is a good reason for that: static linkage makes LGPL compliance much harder.
>
> Alex


From igorm at stats.com Sun Nov 25 14:56:10 2012
From: igorm at stats.com (Morduhaev, Igor (igorm@stats.com))
Date: Sun, 25 Nov 2012 13:56:10 +0000
Subject: [Libav-user] fps problem when saving video in mp4 container
Message-ID: <F43D3702A40CBB40A1902FC1AD8999093045F2@SEAVER.stats.local>

Carl, firstly thank you for your answer.


1. You wrote "You could test with ffmpeg (the application) and let the libraries print the values you set in your code to find out which one is wrong"

What do you mean by this? Can you give an example?

2. I ran the command below with the ffmpeg I downloaded from http://ffmpeg.zeranoe.com/builds/

"ffmpeg.exe -i c:\000227_C1_GAME.avi -vcodec libx264 c:\test.mp4"

I wanted to see what params it uses so I could set them in my code, but I couldn't map many of them to fields in the code. For example, AVCodecContext is missing the params "rc", "mbtree", "crf" and "ip_ratio". How do I set them in code?



C:\Users\igorm\Desktop\ffmpeg-20121029-git-11d695d-win32-shared\bin>ffmpeg.exe -i c:\000227_C1_GAME.avi -vcodec libx264 c:\test.mp4
ffmpeg version N-46146-g11d695d Copyright (c) 2000-2012 the FFmpeg developers
  built on Oct 29 2012 18:08:23 with gcc 4.7.2 (GCC)
  configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      52.  1.100 / 52.  1.100
  libavcodec     54. 69.100 / 54. 69.100
  libavformat    54. 35.100 / 54. 35.100
  libavdevice    54.  3.100 / 54.  3.100
  libavfilter     3. 20.109 /  3. 20.109
  libswscale      2.  1.101 /  2.  1.101
  libswresample   0. 16.100 /  0. 16.100
  libpostproc    52.  1.100 / 52.  1.100
[mpeg4 @ 0076f420] Invalid and inefficient vfw-avi packed B frames detected
Input #0, avi, from 'c:\000227_C1_GAME.avi':
  Duration: 00:05:59.48, start: 0.000000, bitrate: 1398 kb/s
    Stream #0:0: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 808x610 [SAR 1:1 DAR 404:305], 25 tbr, 25 tbn, 25 tbc
File 'c:\test.mp4' already exists. Overwrite ? [y/N] y
using SAR=1/1
[libx264 @ 025f3c80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
[libx264 @ 025f3c80] profile High, level 3.1
[libx264 @ 025f3c80] 264 - core 128 r2216 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'c:\test.mp4':
  Metadata:
    encoder         : Lavf54.35.100
    Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 808x610 [SAR 1:1 DAR 404:305], q=-1--1, 12800 tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (mpeg4 -> libx264)
Press [q] to stop, [?] for help
[mpeg4 @ 04ca3840] Invalid and inefficient vfw-avi packed B frames detected
frame=   57 fps=0.0 q=28.0 size=      71kB time=00:00:00.20 bitrate=2896.9kbits/
frame=   89 fps= 88 q=28.0 size=     202kB time=00:00:01.48 bitrate=1117.3kbits/





3. I tested my code and it does compile! I compile it in VS2010 after configuring "VC++ Directories" in the project's Property Pages. What error do you get?
Here is the whole code with the includes I use:

#include "stdafx.h"

#include "inttypes.h"

extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavutil/avutil.h"
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}

#include <iostream>
using namespace sv;
using namespace std;

//TODO: use thread_count to increase converting speed

int main(int argc, char* argv[])
{

// ****************************************
// Here goes the code from my last post
// ****************************************

return 0;
}



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/4dc66e3e/attachment.html>

From alexcohn at netvision.net.il Sun Nov 25 18:58:59 2012
From: alexcohn at netvision.net.il (Alex Cohn)
Date: Sun, 25 Nov 2012 19:58:59 +0200
Subject: [Libav-user] installing for Windows
In-Reply-To: <3BBC9B96-F5EE-46CB-A910-874CADB7EA44@gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<3BBC9B96-F5EE-46CB-A910-874CADB7EA44@gmail.com>
Message-ID: <CADiKgoh_HezDYHvfFc1-xYiWPt7tuK9xfNw8qt0gq841NdVtYw@mail.gmail.com>

On 25 Nov 2012 19:50, René J.V. Bertin <rjvbertin at gmail.com> wrote:
>
> Why is that?

LGPL requires that the end user be able to rebuild ffmpeg herself and use that
version instead of what you provided. Swapping DLL files is a piece of
cake; with a static lib you must provide your code in object format and
instructions for relinking it with the upgraded ffmpeg libs.

Alex
>
> On Nov 25, 2012, at 18:46, Alex Cohn wrote:
>
> > > There aren't any in the zeranoe builds. And I don't know of any other
site that provides a static library for ffmpeg.
> >
> > And there is a good reason for that: static linkage makes LGPL
compliance much harder.
> >
> > Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121125/a831beb5/attachment.html>

From clopez at igalia.com Mon Nov 26 05:41:04 2012
From: clopez at igalia.com (Carlos Alberto Lopez Perez)
Date: Mon, 26 Nov 2012 05:41:04 +0100
Subject: [Libav-user] Clarifications about why "--use-nonfree" makes ffmpeg
undistributable given the LGPL
Message-ID: <50B2F2E0.5060809@igalia.com>

Hi


I have a doubt about the ffmpeg license that hopefully someone can clarify.


I understand that the ffmpeg license is LGPL.

And I also understand that the LGPL license allows dynamic linking with
any library/software (proprietary or not):

""
When a program is linked with a library, whether statically or using a
shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the entire
combination fits its criteria of freedom. The Lesser General Public
License permits more lax criteria for linking other code with the library.
""" https://www.gnu.org/licenses/lgpl-2.1.html


Then why is it that "--enable-nonfree" makes ffmpeg unredistributable?

I understand that as long as the linking of ffmpeg with the non-free
libraries is dynamic, there shouldn't be any problem distributing it?


Where is the trap? What am I missing?


Thanks!

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 900 bytes
Desc: OpenPGP digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121126/37f5909b/attachment.asc>

From theateist at gmail.com Mon Nov 26 08:50:49 2012
From: theateist at gmail.com (theateist84)
Date: Sun, 25 Nov 2012 23:50:49 -0800 (PST)
Subject: [Libav-user] fps problem when saving video in mp4 container
In-Reply-To: <F43D3702A40CBB40A1902FC1AD8999093045F2@SEAVER.stats.local>
References: <F43D3702A40CBB40A1902FC1AD8999093041BD@SEAVER.stats.local>
<loom.20121122T225846-760@post.gmane.org>
<F43D3702A40CBB40A1902FC1AD8999093045F2@SEAVER.stats.local>
Message-ID: <1353916249558-4656134.post@n4.nabble.com>

Please, can someone help me?



--
View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-fps-porblem-when-saving-video-in-mp4-container-tp4656102p4656134.html
Sent from the libav-users mailing list archive at Nabble.com.

From cehoyos at ag.or.at Mon Nov 26 12:13:43 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 11:13:43 +0000 (UTC)
Subject: [Libav-user] Clarifications about why "--use-nonfree" makes
ffmpeg undistributable given the LGPL
References: <50B2F2E0.5060809@igalia.com>
Message-ID: <loom.20121126T120921-362@post.gmane.org>

Carlos Alberto Lopez Perez <clopez at ...> writes:

> Then, why is that "--enable-nonfree" makes ffmpeg
> unredistributable?

You mean: "why is libfaac making ffmpeg unreproducible
even without x264?"
Your question is valid, the original idea was to make it
more likely to get improvements for the native encoder.
(Instead, even more proprietary aac encoders are
supported now.)

But note that the native encoder is not as bad for stereo
output and the use case of needing aac encoding without
h264 encoding is at least not extremely common.

> I understand than meanwhile the linking of ffmpeg
> with the non-free libraries is dynamic there
> shouldn't be any problems distributing it?

Dynamic vs static linking is totally unrelated to the above
question afaict.

Carl Eugen


From rjvbertin at gmail.com Mon Nov 26 13:26:07 2012
From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=)
Date: Mon, 26 Nov 2012 13:26:07 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
Message-ID: <EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>

And since we're talking about MSWin installs, I've noticed that the win32 and win64 static ffmpeg versions are way less verbose than their *n*x counterparts. That suits me, as I call ffmpeg (and ffprobe) via popen to do some work for me from inside a plugin for GUI applications. I'm still getting a cmd window that opens, which seems logical but somewhat untidy. Is there a way to keep this from happening? An ffmpeg that doesn't use the Console subsystem shouldn't do this, right?

R.

From cehoyos at ag.or.at Mon Nov 26 14:00:21 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 13:00:21 +0000 (UTC)
Subject: [Libav-user] installing for Windows
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
<EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
Message-ID: <loom.20121126T135129-580@post.gmane.org>

René J.V. Bertin <rjvbertin at ...> writes:

> I've noticed that the win32 and win64 static ffmpeg version
> are way less verbose than their *n*x counterparts.

(Command line and complete, uncut console output missing.)
Could you elaborate? This seems unlikely.

> An ffmpeg that doesn't use the Console subsystem shouldn't do this, right?

ffmpeg can only be used from the command line...

Carl Eugen


From clopez at igalia.com Mon Nov 26 14:30:10 2012
From: clopez at igalia.com (Carlos Alberto Lopez Perez)
Date: Mon, 26 Nov 2012 14:30:10 +0100
Subject: [Libav-user] Clarifications about why "--use-nonfree" makes
ffmpeg undistributable given the LGPL
In-Reply-To: <loom.20121126T120921-362@post.gmane.org>
References: <50B2F2E0.5060809@igalia.com>
<loom.20121126T120921-362@post.gmane.org>
Message-ID: <50B36EE2.3090609@igalia.com>

On 26/11/12 12:13, Carl Eugen Hoyos wrote:
> Carlos Alberto Lopez Perez <clopez at ...> writes:
>
>> Then, why is that "--enable-nonfree" makes ffmpeg
>> unredistributable?
>
> You mean: "why is libfaac making ffmpeg unreproducible
> even without x264?"

Not only libfaac:

$ grep "die_license_disabled nonfree" ffmpeg/configure
die_license_disabled nonfree libaacplus
die_license_disabled nonfree libfaac

The Libav fork also disabled some free but gpl-incompatible libraries:
$ grep "die_license_disabled nonfree" libav/configure
die_license_disabled nonfree libfaac
die_license_disabled nonfree libfdk_aac
die_license_disabled nonfree openssl

> Your question is valid, the original idea was to make it
> more likely to get improvements for the native encoder.
> (Instead, even more proprietary aac encoders are
> supported now.)
>
> But note that the native encoder is not as bad for stereo
> output and the use case of needing aac encoding without
> h264 encoding is at least not extremely common.
>

I understand, but I'm not asking about the native aac encoder vs others.

I wish to know why you think that linking ffmpeg with any proprietary
library (or any free but not GPL-compatible library) makes it
unredistributable, when the LGPL clearly says that this is allowed.

./configure --enable-nonfree
[...]
License: nonfree and unredistributable

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 900 bytes
Desc: OpenPGP digital signature
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121126/6b6d34d9/attachment.asc>

From nkipe at tatapowersed.com Mon Nov 26 14:44:09 2012
From: nkipe at tatapowersed.com (Navin)
Date: Mon, 26 Nov 2012 19:14:09 +0530
Subject: [Libav-user] installing for Windows
In-Reply-To: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
Message-ID: <50B37229.3060402@tatapowersed.com>

@Jim: I've used Dev, and had problems with sws_scale not working. No
idea why, but if you download the "shared" prebuilt libraries from
Zeranoe, you can use ffmpeg's DLL files with MSVC like so:

#include <Windows.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>//just as an example

int main()
{
    const char * avfDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avformat-54.dll";//you can place this DLL in the same directory as your exe
    const char * avcDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avcodec-54.dll";//just as an example
    HINSTANCE hinstLib_avformat = LoadLibrary(avfDLLpointer);
    HINSTANCE hinstLib_avcodec = LoadLibrary(avcDLLpointer);//just as an example
    __av_dump_format av_dump_format_proc = (__av_dump_format) GetProcAddress(hinstLib_avformat, "av_dump_format");

    av_register_all_proc();// Register all formats and codecs
}

Navin

On 11/25/2012 7:17 AM, Jim Morgenstern wrote:
>
> A stupid, naïve question that has probably been answered before but I
> would appreciate a reply:
>
> I downloaded FFmpeg git-8b6aeb1 64-bit Dev (Latest)
> <http://ffmpeg.zeranoe.com/builds/win64/dev/ffmpeg-20121120-git-8b6aeb1-win64-dev.7z>
> from Zeranoe
>
> I want to use FFmpeg on Win 7. Is there an install process ?
>
> should I rename the dll s ?? [e.g. I have: libavcodec.dll.a ]
>
> have I downloaded the wrong file ??
>
> thanks
>
> **
>
> *Jim Morgenstern*
>
> Image Mining LLC
>
> 248-252-2626
>
>
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121126/d9142366/attachment.html>

From rjvbertin at gmail.com Mon Nov 26 14:43:23 2012
From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=)
Date: Mon, 26 Nov 2012 14:43:23 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <loom.20121126T135129-580@post.gmane.org>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
<EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
<loom.20121126T135129-580@post.gmane.org>
Message-ID: <9A30EFB5-AD8A-48F4-A011-5C595B95C09D@gmail.com>

On Nov 26, 2012, at 14:00, Carl Eugen Hoyos wrote:

> René J.V. Bertin <rjvbertin at ...> writes:
>
>> I've noticed that the win32 and win64 static ffmpeg version
>> are way less verbose than their *n*x counterparts.
>
> (Command line and complete, uncut console output missing.)
> Could you elaborate? This seems unlikely.
>

OK, I'm confused, I cannot reproduce it anymore. However, launching ffmpeg via popen( "ffmpeg ...", "wb" ) indeed doesn't show any output on the cmd window that's opened.

>> An ffmpeg that doesn't use the Console subsystem shouldn't do this, right?
>
> ffmpeg can only be used from the command line...

Erm, I beg to differ. That may be the most usual way to launch ffmpeg, but just how many applications are there that launch ffmpeg without asking the user to type things onto a command line? On MS Windows, Mac OS X and Linux, GUI applications receive their arguments (files dragged onto their icon) via argc/argv (or can at least obtain them in a comparable manner). Called that way, they run in a limited version of sh or similar, with standard input and outputs connected wherever the system decides.
I have a very basic GUI movie player without a menu or anything, and it receives its arguments exactly that way. If it calls functions in one of my DLLs that contain fprintf(stderr) statements, their output just disappears (on MSWin), rather than causing a CMD window to be opened.

R.

From nkipe at tatapowersed.com Mon Nov 26 14:57:11 2012
From: nkipe at tatapowersed.com (Navin)
Date: Mon, 26 Nov 2012 19:27:11 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <loom.20121121T235434-455@post.gmane.org>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
<50AC67F1.4000402@tatapowersed.com>
<loom.20121121T124512-813@post.gmane.org>
<436399CBF77B614A9032D97E1D6E8AF16335EF334C@VMBX116.ihostexchange.net>
<loom.20121121T235434-455@post.gmane.org>
Message-ID: <50B37537.3080805@tatapowersed.com>

Got the solution to my problem quite a while back. Posting it now as I
have time. Hope it helps someone else. Like Carl said, it'd be best for
a person to build the ffmpeg libraries by themselves.
I was using the 32bit dev builds from Zeranoe, and linking to the .lib
files caused the compiler to ask for the corresponding DLL files too,
but DLL's aren't part of the dev builds, so I had to take the DLL's of
the shared build.
Using it this way, my ffmpeg code worked, but used to strangely crash at
sws_scale.

But using purely the shared build's DLL's everything worked fine. Some
example code for the RGB values:
//This example is from the updated tutorial 01: https://github.com/mpenkov/ffmpeg-tutorial
#include <Windows.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>//just as an example

int main()
{
    const char * avfDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avformat-54.dll";//you can place this DLL in the same directory as your exe
    const char * avcDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avcodec-54.dll";//just as an example
    HINSTANCE hinstLib_avformat = LoadLibrary(avfDLLpointer);
    HINSTANCE hinstLib_avcodec = LoadLibrary(avcDLLpointer);//just as an example
    __av_dump_format av_dump_format_proc = (__av_dump_format) GetProcAddress(hinstLib_avformat, "av_dump_format");

    ...blah blah...
    sws_ctx = sws_getContext_proc(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
    avpicture_fill_proc((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
    ...blah blah...
    sws_scale_proc(sws_ctx, (uint8_t const * const *)pFrame->data,
        pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
    for(int yy = 0; yy < pCodecCtx->height; ++yy)
    {
        for(int xx = 0; xx < pCodecCtx->width; ++xx)
        {
            int p = xx * 3 + yy * pFrameRGB->linesize[0];
            int r = pFrameRGB->data[0][p];
            int g = pFrameRGB->data[0][p+1];
            int b = pFrameRGB->data[0][p+2];
        }
    }
    ...blah blah...
}


Navin

On 11/22/2012 4:32 AM, Carl Eugen Hoyos wrote:
> Malik Cissé <mc at ...> writes:
>
>> Carl Eugen ... writes:
>>> ...compile FFmpeg with msvc.
>> This sounds interesting. Where can I find msvc sample project?
> Please see the MSVC section on the platform page
> of the fine documentation to find information on
> how to compile FFmpeg with MSVC.
>
> Carl Eugen
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
>

From hannes.wuerfel at student.hpi.uni-potsdam.de Mon Nov 26 14:59:29 2012
From: hannes.wuerfel at student.hpi.uni-potsdam.de (Hannes Wuerfel)
Date: Mon, 26 Nov 2012 14:59:29 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
<EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
Message-ID: <50B375C1.1070500@student.hpi.uni-potsdam.de>

On 26.11.2012 13:26, "René J.V. Bertin" wrote:
> I'm still getting a cmd window that opens, which seems logical but somewhat untidy. Is there a way to keep this from happening? An ffmpeg that doesn't use the Console subsystem shouldn't do this, right?
Hi René,
I don't think that ffmpeg needs to hide itself, but of course there are
several ways of hiding consoles and windows.
Is this something you are searching for?
http://www.cplusplus.com/forum/beginner/12001/

From nkipe at tatapowersed.com Mon Nov 26 15:10:53 2012
From: nkipe at tatapowersed.com (Navin)
Date: Mon, 26 Nov 2012 19:40:53 +0530
Subject: [Libav-user] installing for Windows
In-Reply-To: <50B375C1.1070500@student.hpi.uni-potsdam.de>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
<EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
<50B375C1.1070500@student.hpi.uni-potsdam.de>
Message-ID: <50B3786D.3030900@tatapowersed.com>

@Jim:
Sorry, I missed out an important typedef. The corrected code is:
#include <Windows.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>//just as an example

typedef void (__cdecl *__av_register_all)(void);
typedef void (__cdecl *__av_dump_format)(AVFormatContext *ic, int index, const char *url, int is_output);

int main()
{
    const char * avfDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avformat-54.dll";//you can place this DLL in the same directory as your exe
    const char * avcDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avcodec-54.dll";//just as an example
    HINSTANCE hinstLib_avformat = LoadLibrary(avfDLLpointer);
    HINSTANCE hinstLib_avcodec = LoadLibrary(avcDLLpointer);//just as an example
    __av_register_all av_register_all_proc = (__av_register_all) GetProcAddress(hinstLib_avformat, "av_register_all");
    __av_dump_format av_dump_format_proc = (__av_dump_format) GetProcAddress(hinstLib_avformat, "av_dump_format");

    av_register_all_proc();// Register all formats and codecs
    av_dump_format_proc(pFormatCtx, 0, filename, 0);
}

Navin


On 11/26/2012 7:29 PM, Hannes Wuerfel wrote:
> On 26.11.2012 13:26, "René J.V. Bertin" wrote:
>> I'm still getting a cmd window that opens, which seems logical but
>> somewhat untidy. Is there a way to keep this from happening? An
>> ffmpeg that doesn't use the Console subsystem shouldn't do this, right?
> Hi René,
> I don't think that ffmpeg needs to hide itself, but of course there are
> several ways of hiding consoles and windows.
> Is this something you are searching for?
> http://www.cplusplus.com/forum/beginner/12001/
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
>

From nkipe at tatapowersed.com Mon Nov 26 15:19:23 2012
From: nkipe at tatapowersed.com (Navin)
Date: Mon, 26 Nov 2012 19:49:23 +0530
Subject: [Libav-user] get RGB values from ffmpeg frame
In-Reply-To: <loom.20121121T235434-455@post.gmane.org>
References: <50AAF761.30704@tatapowersed.com>
<loom.20121120T110352-503@post.gmane.org>
<50AB64CA.8000706@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31D8@VMBX116.ihostexchange.net>
<50AB6BC1.30308@tatapowersed.com>
<436399CBF77B614A9032D97E1D6E8AF16335EF31EF@VMBX116.ihostexchange.net>
<50AC67F1.4000402@tatapowersed.com>
<loom.20121121T124512-813@post.gmane.org>
<436399CBF77B614A9032D97E1D6E8AF16335EF334C@VMBX116.ihostexchange.net>
<loom.20121121T235434-455@post.gmane.org>
Message-ID: <50B37A6B.8030604@tatapowersed.com>

Sorry, the correct code (with the necessary typedefs) is as such:

#include <Windows.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

typedef SwsContext* (__cdecl *__sws_getContext)(int srcW, int srcH, enum AVPixelFormat srcFormat, int dstW, int dstH, enum AVPixelFormat dstFormat, int flags, SwsFilter *srcFilter, SwsFilter *dstFilter, const double *param);
typedef int (__cdecl *__sws_scale)(struct SwsContext *c, const uint8_t *const srcSlice[], const int srcStride[], int srcSliceY, int srcSliceH, uint8_t *const dst[], const int dstStride[]);
typedef int (__cdecl *__avpicture_fill)(AVPicture *picture, uint8_t *ptr, enum AVPixelFormat pix_fmt, int width, int height);

int main()
{
    const char * swsDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\swscale-2.dll";
    const char * avcDLLpointer = "F:\\Nav\\VisualStudio\\ffmpegTestProgram\\ffmpegTestProgram\\avcodec-54.dll";
    HINSTANCE hinstLib_swscale = LoadLibrary(swsDLLpointer);
    HINSTANCE hinstLib_avcodec = LoadLibrary(avcDLLpointer);

    __avpicture_fill avpicture_fill_proc = (__avpicture_fill) GetProcAddress(hinstLib_avcodec, "avpicture_fill");
    __sws_getContext sws_getContext_proc = (__sws_getContext) GetProcAddress(hinstLib_swscale, "sws_getContext");
    __sws_scale sws_scale_proc = (__sws_scale) GetProcAddress(hinstLib_swscale, "sws_scale");

    ...blah blah...
    sws_ctx = sws_getContext_proc(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
    avpicture_fill_proc((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
    ...blah blah...
    sws_scale_proc(sws_ctx, (uint8_t const * const *)pFrame->data,
        pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
    for(int yy = 0; yy < pCodecCtx->height; ++yy)
    {
        for(int xx = 0; xx < pCodecCtx->width; ++xx)
        {
            int p = xx * 3 + yy * pFrameRGB->linesize[0];
            int r = pFrameRGB->data[0][p];
            int g = pFrameRGB->data[0][p+1];
            int b = pFrameRGB->data[0][p+2];
        }
    }
    ...blah blah...
}

It's basically that simple to get RGB values.

Navin

On 11/22/2012 4:32 AM, Carl Eugen Hoyos wrote:
> Malik Cissé <mc at ...> writes:
>
>> Carl Eugen ... writes:
>>> ...compile FFmpeg with msvc.
>> This sounds interesting. Where can I find msvc sample project?
> Please see the MSVC section on the platform page
> of the fine documentation to find information on
> how to compile FFmpeg with MSVC.
>
> Carl Eugen
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
>

From philip at turmel.org Mon Nov 26 15:22:50 2012
From: philip at turmel.org (Phil Turmel)
Date: Mon, 26 Nov 2012 09:22:50 -0500
Subject: [Libav-user] Clarifications about why "--use-nonfree" makes
ffmpeg undistributable given the LGPL
In-Reply-To: <50B36EE2.3090609@igalia.com>
References: <50B2F2E0.5060809@igalia.com>
<loom.20121126T120921-362@post.gmane.org>
<50B36EE2.3090609@igalia.com>
Message-ID: <50B37B3A.3010506@turmel.org>

On 11/26/2012 08:30 AM, Carlos Alberto Lopez Perez wrote:

> I wish to know why you think that linking ffmpeg with any proprietary
> library (or any free but not GPL-compatible library) makes it
> unredistributable, when the LGPL clearly says that this is allowed.

The LGPL does *not* automatically grant distribution rights to a linked
program. There are a number of distribution conditions, with a number
of variations, all intended to make sure the end-user can practically
and legally replace the distributed library with their own modified
version, and still have a working application.

> ./configure --enable-nonfree
> [...]
> License: nonfree and unredistributable

FFmpeg can make use of resources that aren't themselves LGPL, or are
otherwise encumbered (e.g. patents). When doing so, the compiled copy
of FFmpeg is entangled with conditions from those external resources.
The combination of FFmpeg's conditions with the other conditions cannot
be satisfied, so the result is undistributable.

In other words, this is a combination that an end-user must compile for
themselves. This scenario is one of the reasons the LGPL was written to
force applications to permit relinking or its equivalent.

I am not a lawyer, though. You might want to consult one if you don't
understand the above.

Regards,

Phil

From cehoyos at ag.or.at Mon Nov 26 16:10:39 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 15:10:39 +0000 (UTC)
Subject: [Libav-user] installing for Windows
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
<EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
<loom.20121126T135129-580@post.gmane.org>
<9A30EFB5-AD8A-48F4-A011-5C595B95C09D@gmail.com>
Message-ID: <loom.20121126T154520-861@post.gmane.org>

René J.V. Bertin <rjvbertin at ...> writes:

> >> An ffmpeg that doesn't use the Console subsystem shouldn't
> >> do this, right?
> >
> > ffmpeg can only be used from the command line...
>
> Erm, I beg to differ. That may be the most usual way to
> launch ffmpeg, but just how many applications are there
> that launch ffmpeg without asking the user to type things
> onto a command line? On MS Windows, Mac OS X and
> Linux, GUI applications receive their arguments (files
> dragged onto their icon) via argc/argv (or can at
> least obtain them in a comparable manner).

> Called that way, they run in a limited version of sh or similar,
> with standard input and outputs connected wherever the system decides.

The point I was trying to make was that ffmpeg is made to run
inside a shell and cannot know that you want to run it "in a limited
version of sh".

> I have a very basic GUI movie player without a menu or anything,
> and it receives its arguments exactly that way. If it calls
> functions in one of my DLLs that contain fprintf(stderr)
> statements, their output just disappears (on MSWin), rather
> than causing a CMD window to be opened.

I may misunderstand:
Do you mean your application does it differently than FFmpeg and
you suggest FFmpeg do it like your application, or do you
mean your application uses libavcodec (etc.) and you have
problems with its output? In the latter case please note that there are
no direct calls to printf in libavcodec; you can set an av_log
callback to redirect output wherever you want it.

Carl Eugen
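
A minimal sketch of that callback route (the log file name is only an example; this simplified callback drops the "[libx264 @ ...]" context prefix that the default one prints):

#include <stdio.h>
#include <stdarg.h>
#include <libavutil/log.h>

// Sketch: send all libav* messages to a file instead of stderr,
// so a GUI host never produces console output.
static void file_log_callback(void *ptr, int level, const char *fmt, va_list vl)
{
    static FILE *logfile;
    if (!logfile)
        logfile = fopen("libav_log.txt", "a");
    if (logfile && level <= av_log_get_level()) {
        vfprintf(logfile, fmt, vl);
        fflush(logfile);
    }
}

// Call once, before any other libav* call:
//     av_log_set_callback(file_log_callback);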


From cehoyos at ag.or.at Mon Nov 26 17:04:22 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 16:04:22 +0000 (UTC)
Subject: [Libav-user] Clarifications about why
References: <50B2F2E0.5060809@igalia.com>
<loom.20121126T120921-362@post.gmane.org>
<50B36EE2.3090609@igalia.com>
Message-ID: <loom.20121126T170201-785@post.gmane.org>

Carlos Alberto Lopez Perez <clopez at ...> writes:

> I wish to know why you think that linking ffmpeg with any proprietary
> library (or any free but not GPL-compatible library) makes it
> unredistributable, when the LGPL clearly says that this is allowed.

Please read my email again, I did not say that.

Carl Eugen


From cehoyos at ag.or.at Mon Nov 26 17:06:22 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 16:06:22 +0000 (UTC)
Subject: [Libav-user] Clarifications about why "--use-nonfree" makes
ffmpeg undistributable given the LGPL
References: <50B2F2E0.5060809@igalia.com>
<loom.20121126T120921-362@post.gmane.org>
<50B36EE2.3090609@igalia.com>
Message-ID: <loom.20121126T170534-969@post.gmane.org>

Carlos Alberto Lopez Perez <clopez at ...> writes:

> I understand, but I'm not asking about the native
> aac encoder vs others.

I should add that this is the only real-world use case,
or am I missing something?

Carl Eugen


From igorm at stats.com Mon Nov 26 15:21:38 2012
From: igorm at stats.com (Morduhaev, Igor (igorm@stats.com))
Date: Mon, 26 Nov 2012 14:21:38 +0000
Subject: [Libav-user] Could someone help me?
Message-ID: <F43D3702A40CBB40A1902FC1AD899909304F84@SEAVER.stats.local>

I posted a question, but still didn't get any answer to it. Could someone please help? What am I doing wrong?

http://libav-users.943685.n4.nabble.com/Libav-user-fps-porblem-when-saving-video-in-mp4-container-td4656102.html

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121126/a13fd9ee/attachment.html>

From aykut_avci at hotmail.com Mon Nov 26 17:22:05 2012
From: aykut_avci at hotmail.com (avpicture)
Date: Mon, 26 Nov 2012 08:22:05 -0800 (PST)
Subject: [Libav-user] Encoding multiple images with H264
Message-ID: <1353946925779-4656150.post@n4.nabble.com>

Hello all,

The code below is to encode a couple of pictures using H264.
Everything seems fine but I don't see any encoded data in "encoded_buf".
Could you please comment on it?

So the ret variables are to check the return values.
ret1 = 5760000
ret2 = 2160000
ret3 = 900
ret4 = 0

PS: mfSetContextParameters function is to set the context parameters. I will
place the code at the end of the page.

Thank you.

---------------------------
AVCodec *codec;
int encoded_buf_size;
uint8_t *encoded_buf, *yuvoutbuffer;
AVFrame* YUVframe;
AVFrame *RGBframe;
int ret1=0, ret2=0, ret3=0, ret4=0, ret5=0;

YUVframe = avcodec_alloc_frame();
RGBframe = avcodec_alloc_frame();

c = mfSetContextParameters(c, aWidth, aHeight, mvGOPsize);

codec = avcodec_find_encoder(CODEC_ID_H264);
if (!codec) {gpLogDebug("Video codec not found."); return;}
if (avcodec_open(c, codec) < 0) {gpLogDebug("Could not open codec.");
return;}

int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
yuvoutbuffer=(uint8_t *) av_malloc(nbytes);
YUVframe->pts = 0;

ret1=avpicture_fill((AVPicture*)RGBframe, (uint8_t *) bits[0],
PIX_FMT_RGB32, c->width, c->height);
ret2=avpicture_fill((AVPicture*)YUVframe, yuvoutbuffer, PIX_FMT_YUV420P,
c->width, c->height);

struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGB32,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);


//perform the conversion
ret3=sws_scale(fooContext, RGBframe->data, RGBframe->linesize, 0,
c->height, YUVframe->data, YUVframe->linesize);
// Here is where I try to convert to YUV
encoded_buf_size = 200000;
encoded_buf = (uint8_t *) malloc(encoded_buf_size);
ret4 = avcodec_encode_video(c, encoded_buf, encoded_buf_size, YUVframe);

av_free(encoded_buf);
av_free(RGBframe);
av_free(YUVframe);
av_free(yuvoutbuffer);

-----------------------------------------

c= avcodec_alloc_context();
c->bit_rate = 512000;
c->width = aWidth;
c->height = aHeight;
c->time_base.num = 1;
c->time_base.den = 30;
c->gop_size = mvGOPsize;
c->max_b_frames=0;
c->bit_rate_tolerance= 40000;
c->pix_fmt = PIX_FMT_YUV420P;
c->me_range = 16;
c->max_qdiff = 1;
c->qmin = 1;
c->qmax = 30;
c->qcompress = 0.6;
c->keyint_min = 120;
c->refs = 3;
c->trellis =1;
c->rtp_payload_size = 500;
c->level=13; //Level 1.3
c->me_method=7;
c->qblur=0.5;
c->profile=66;//Baseline
c->thread_count=6;




--
View this message in context: http://libav-users.943685.n4.nabble.com/Encoding-multiples-images-with-H264-tp4656150.html
Sent from the libav-users mailing list archive at Nabble.com.

From aykut_avci at hotmail.com Mon Nov 26 15:52:35 2012
From: aykut_avci at hotmail.com (avpicture)
Date: Mon, 26 Nov 2012 06:52:35 -0800 (PST)
Subject: [Libav-user] Need a comment
Message-ID: <1353941555426-4656146.post@n4.nabble.com>

Hello,

I am trying to encode a couple of frames in memory and I have
written/copied the code below for this purpose.
I can reach the memory address of the first picture via bits[0]. Later on I
will add the other frames into the code. The code below is just to see if
everything is running as expected.

When the "ret3=sws_scale..." line runs (performing the conversion from
RGB32 to YUV420), an error appears on the screen saying "bad dst image
pointers". I have been debugging and checking the outpic parameter, which is
the dst in sws_scale's parameter list, but I couldn't find out why I am
getting this error.

Could you please give me a hint for this error?

PS: mfSetContextParameters function is to set all the values for the context
variable. I will copy it at the end of the page.

Thank you very much,

AVCodec *codec;
AVCodecContext *c= NULL;
int out_size, size, outbuf_size;
FILE *f;
AVFrame *picture;
uint8_t *outbuf, *picture_buf;
int ret1=0, ret2=0, ret3=0;

outbuf_size=200000;
outbuf = (uint8_t *)malloc(outbuf_size);

picture= avcodec_alloc_frame();

c=mfSetContextParameters(c, aWidth, aHeight);

codec = avcodec_find_encoder(CODEC_ID_H264);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}

/* open it */
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}

size = c->width * c->height;

AVFrame* outpic = avcodec_alloc_frame();
outpic -> pts = 0;
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
outbuffer=NULL;


ret1=avpicture_fill((AVPicture*)picture, (uint8_t *) bits[0],
PIX_FMT_RGB32, c->width, c->height);
ret2=avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P,
c->width, c->height);

struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGB32,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL,
NULL);


//perform the conversion
ret3=sws_scale(fooContext, picture->data, picture->linesize, 0,
c->height, outpic->data, outpic->linesize);
// Here is where I try to convert to YUV

out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);


-----------------------

c= avcodec_alloc_context();
c->bit_rate = 512000;
c->width = aWidth;
c->height = aHeight;
c->time_base.num = 1;
c->time_base.den = 30;
c->gop_size = 200;
c->max_b_frames=0;
c->bit_rate_tolerance= 40000;
c->pix_fmt = PIX_FMT_YUV420P;
c->me_range = 16;
c->max_qdiff = 1;
c->qmin = 1;
c->qmax = 30;
c->qcompress = 0.6;
c->me_range = 16;
c->keyint_min = 120;
c->refs = 3;
c->trellis =1;
c->rtp_payload_size = 500;
c->level=13; //Level 1.3
c->me_method=7;
c->qblur=0.5;
c->profile=66;//Baseline
c->thread_count=6;





--
View this message in context: http://libav-users.943685.n4.nabble.com/Need-a-comment-tp4656146.html
Sent from the libav-users mailing list archive at Nabble.com.

From dhenry at movenetworks.com Tue Nov 27 00:07:46 2012
From: dhenry at movenetworks.com (David Henry)
Date: Mon, 26 Nov 2012 16:07:46 -0700
Subject: [Libav-user] E-AC-3 7.1 encoding?
Message-ID: <62E9C7E658EC5A4BB68AB867D3A22FB501C39D43894B@slcexchsrv04.movenetworks.com>

I'm wondering about encoding E-AC-3 7.1 surround using CODEC_ID_EAC3, but couldn't find any sample code.
If supported, when was this feature added? (I'm using libavcodec 53. 42. 4)
I'm thinking that I select:
CODEC_ID_EAC3 as the codec,
AVCodecContext.channels = 8
AVCodecContext.sample_fmt = AV_SAMPLE_FMT_FLT
AVCodecContext.channel_layout = AV_CH_LAYOUT_7POINT1
and then just encode as I would any other audio using avcodec_encode_audio().
Are there any extra steps or gotchas when trying to encode 7.1 surround?

From cehoyos at ag.or.at Tue Nov 27 00:43:18 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 23:43:18 +0000 (UTC)
Subject: [Libav-user] E-AC-3 7.1 encoding?
References: <62E9C7E658EC5A4BB68AB867D3A22FB501C39D43894B@slcexchsrv04.movenetworks.com>
Message-ID: <loom.20121127T004245-928@post.gmane.org>

David Henry <dhenry at ...> writes:

> I'm wondering about encoding E-AC-3 7.1 surround using CODEC_ID_EAC3

The eac3 encoder currently does not support 7.1 encoding.

Carl Eugen


From cehoyos at ag.or.at Tue Nov 27 00:46:05 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 23:46:05 +0000 (UTC)
Subject: [Libav-user] Could someone help me?
References: <F43D3702A40CBB40A1902FC1AD899909304F84@SEAVER.stats.local>
Message-ID: <loom.20121127T004416-677@post.gmane.org>

Morduhaev, Igor (igorm at ...) <igorm at ...> writes:

> I posted a question, but still didn't get any answer to it?

You still did not post anything that would compile afaict.
(Please understand that this is no guarantee that you will
get a satisfying answer once you post a question that can
be understood in any way. I am just trying to explain why
at least I have no chance whatsoever in answering.)

Carl Eugen


From dhenry at movenetworks.com Tue Nov 27 00:51:06 2012
From: dhenry at movenetworks.com (David Henry)
Date: Mon, 26 Nov 2012 16:51:06 -0700
Subject: [Libav-user] E-AC-3 7.1 encoding?
In-Reply-To: <loom.20121127T004245-928@post.gmane.org>
References: <62E9C7E658EC5A4BB68AB867D3A22FB501C39D43894B@slcexchsrv04.movenetworks.com>,
<loom.20121127T004245-928@post.gmane.org>
Message-ID: <62E9C7E658EC5A4BB68AB867D3A22FB501C39D43894C@slcexchsrv04.movenetworks.com>

Does anyone know of public plans (include url please) or time frames to support this feature? Thanks.

>> I'm wondering about encoding E-AC-3 7.1 surround using CODEC_ID_EAC3

>The eac3 encoder currently does not support 7.1 encoding.
>
>Carl Eugen

From cehoyos at ag.or.at Tue Nov 27 00:58:24 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 26 Nov 2012 23:58:24 +0000 (UTC)
Subject: [Libav-user] E-AC-3 7.1 encoding?
References: <62E9C7E658EC5A4BB68AB867D3A22FB501C39D43894B@slcexchsrv04.movenetworks.com>,
<loom.20121127T004245-928@post.gmane.org>
<62E9C7E658EC5A4BB68AB867D3A22FB501C39D43894C@slcexchsrv04.movenetworks.com>
Message-ID: <loom.20121127T005648-342@post.gmane.org>

David Henry <dhenry at ...> writes:

> Does anyone know of public plans (include url please)
> or time frames to support this feature?

You may misunderstand how open-source development /
FFmpeg development works.
(Even if there are plans that I don't know about
there would be no "time frames".)

Please do not top-post here, Carl Eugen


From rjvbertin at gmail.com Tue Nov 27 03:06:08 2012
From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=)
Date: Tue, 27 Nov 2012 03:06:08 +0100
Subject: [Libav-user] installing for Windows
In-Reply-To: <loom.20121126T154520-861@post.gmane.org>
References: <04c101cdcaae$c52af720$4f80e560$@ImageMining.net>
<CALCauYdREO6jJ=tz1Df=7LX-ceD3xhcbxD-ivpuUiYm1huXoSw@mail.gmail.com>
<35FC91C9-DFA4-4D0B-B034-BD387EE1D0CF@gmail.com>
<CALCauYeDLOG08SLwtcASzXty2NWJn04uUDu_BZxkeSeo7fNrsA@mail.gmail.com>
<4459A0B0-B688-4A8E-9800-04A5B743D392@gmail.com>
<CALCauYePnh2kv3JKTD0JOVddoqZd5R1=w5dX+7xGJWR_0Ey_yg@mail.gmail.com>
<CADiKgoiF27OYf4H32Ch9fPveMT2Y4721ddoMi_7VhvwR0cHwPQ@mail.gmail.com>
<59EE8E9C-1EC5-470B-9611-F6F8D49C437E@gmail.com>
<EBF4C659-196B-41EB-BFE9-7327DE4A2A00@gmail.com>
<loom.20121126T135129-580@post.gmane.org>
<9A30EFB5-AD8A-48F4-A011-5C595B95C09D@gmail.com>
<loom.20121126T154520-861@post.gmane.org>
Message-ID: <E90C27B9-531E-4D3F-8C76-C089B409AFE8@gmail.com>

Well, turns out ffmpeg is REALLY not responsible for the opening of console windows when called through _popen(), regardless of how it's built.

It's _popen() itself that opens the console window if the calling process isn't attached to a console. I'll presume this is what allows it to work even in a Windows app.

Seems I have the option to write my own _popen() variant that opens the window invisibly, but that's probably going to require more time investment than I'm interested to make...

R.
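
For the record, a rough sketch of what such a variant could look like (CreateProcess with CREATE_NO_WINDOW and a redirected pipe; error handling omitted and the names are illustrative, not a drop-in _popen replacement):

#include <windows.h>

// Sketch: start ffmpeg without a console window and read its stdout/stderr
// from a pipe. The caller reads from *out_read, then waits on and closes
// the returned process handle, e.g. run_hidden("ffmpeg.exe -i in.avi out.mp4", &rd);
static HANDLE run_hidden(const char *cmdline, HANDLE *out_read)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    HANDLE read_end, write_end;
    char cmd[1024];

    CreatePipe(&read_end, &write_end, &sa, 0);
    SetHandleInformation(read_end, HANDLE_FLAG_INHERIT, 0); // parent end not inherited

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = write_end;
    si.hStdError  = write_end;

    lstrcpynA(cmd, cmdline, sizeof(cmd));   // CreateProcess may modify the buffer
    CreateProcessA(NULL, cmd, NULL, NULL, TRUE, CREATE_NO_WINDOW,
                   NULL, NULL, &si, &pi);

    CloseHandle(write_end);                 // keep only the read end in the parent
    CloseHandle(pi.hThread);
    *out_read = read_end;
    return pi.hProcess;
}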

From stonerain at lycos.co.kr Tue Nov 27 11:29:17 2012
From: stonerain at lycos.co.kr (=?UTF-8?B?7J207ISd7Jqw?=)
Date: Tue, 27 Nov 2012 02:29:17 -0800 (PST)
Subject: [Libav-user] Can i get input devices name using libav in window?
Message-ID: <1354012157830-4656159.post@n4.nabble.com>

Hello,
I'm a beginner with ffmpeg.

I have a camera and a microphone connected to my computer,
and I know the devices' names,
but I want to know how I can get the device names using libav functions...

Thank you, everyone.








--
View this message in context: http://libav-users.943685.n4.nabble.com/Can-i-get-input-devices-name-using-libav-in-window-tp4656159.html
Sent from the libav-users mailing list archive at Nabble.com.

From wagner.patriota at gmail.com Tue Nov 27 20:26:47 2012
From: wagner.patriota at gmail.com (Wagner Patriota)
Date: Tue, 27 Nov 2012 17:26:47 -0200
Subject: [Libav-user] open images (jpg/png/etc) with ffmpeg and getting an
AVPicture
In-Reply-To: <CAKuDJW0WE6toCpoJozgkep1P1UeTFWe3uNdqf3G2macmAdHyQQ@mail.gmail.com>
References: <CAKuDJW0WE6toCpoJozgkep1P1UeTFWe3uNdqf3G2macmAdHyQQ@mail.gmail.com>
Message-ID: <CAKuDJW03xt3Wn=k8NQRVtNFPo+R4CxLfzLWERnfWjE_mES93kA@mail.gmail.com>

Could somebody give me some direction?

I am trying to use all the latest ffmpeg commands...
should I open it the same way I open a video, or is there some shortcut?

thanks
Wagner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121127/fc159a49/attachment.html>

From stefasab at gmail.com Tue Nov 27 23:13:27 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Tue, 27 Nov 2012 23:13:27 +0100
Subject: [Libav-user] open images (jpg/png/etc) with ffmpeg and getting
an AVPicture
In-Reply-To: <CAKuDJW03xt3Wn=k8NQRVtNFPo+R4CxLfzLWERnfWjE_mES93kA@mail.gmail.com>
References: <CAKuDJW0WE6toCpoJozgkep1P1UeTFWe3uNdqf3G2macmAdHyQQ@mail.gmail.com>
<CAKuDJW03xt3Wn=k8NQRVtNFPo+R4CxLfzLWERnfWjE_mES93kA@mail.gmail.com>
Message-ID: <20121127221327.GI23348@arborea>

On date Tuesday 2012-11-27 17:26:47 -0200, Wagner Patriota encoded:
> Could somebody give me some direction?
>
> I am trying to use all the latest ffmpeg commands...
> should I open it the same way I open a video? or there's some shortcut?

Check how I did it in libavfilter/lavfutils.c:ff_load_image()

I don't know if we should provide some generic utilities somewhere
(maybe libavformat?).
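
In case the libavfilter source is not at hand, a rough sketch of the same approach with the public API (treat the image as a one-frame input; the function name and error handling are illustrative, not the actual ff_load_image code):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

// Sketch: decode a single image file (jpg/png/...) into *frame, which the caller
// allocated with avcodec_alloc_frame(). Returns 0 on success, <0 on error.
static int load_image_frame(const char *filename, AVFrame *frame)
{
    AVFormatContext *fmt = NULL;
    AVCodecContext *dec_ctx;
    AVCodec *dec;
    AVPacket pkt;
    int got_frame = 0, ret;

    av_register_all();
    if ((ret = avformat_open_input(&fmt, filename, NULL, NULL)) < 0)
        return ret;
    if ((ret = avformat_find_stream_info(fmt, NULL)) < 0)
        goto end;

    dec_ctx = fmt->streams[0]->codec;
    dec     = avcodec_find_decoder(dec_ctx->codec_id);
    if (!dec || (ret = avcodec_open2(dec_ctx, dec, NULL)) < 0)
        goto end;

    if ((ret = av_read_frame(fmt, &pkt)) < 0)   // the whole image arrives as one packet
        goto end;
    ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt);
    av_free_packet(&pkt);
    if (ret >= 0 && !got_frame)
        ret = AVERROR_INVALIDDATA;

end:
    avformat_close_input(&fmt);
    return ret < 0 ? ret : 0;
}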

From wagner.patriota at gmail.com Tue Nov 27 23:31:17 2012
From: wagner.patriota at gmail.com (Wagner Patriota)
Date: Tue, 27 Nov 2012 20:31:17 -0200
Subject: [Libav-user] open images (jpg/png/etc) with ffmpeg and getting
an AVPicture
In-Reply-To: <20121127221327.GI23348@arborea>
References: <CAKuDJW0WE6toCpoJozgkep1P1UeTFWe3uNdqf3G2macmAdHyQQ@mail.gmail.com>
<CAKuDJW03xt3Wn=k8NQRVtNFPo+R4CxLfzLWERnfWjE_mES93kA@mail.gmail.com>
<20121127221327.GI23348@arborea>
Message-ID: <CAKuDJW1EW5qtrYrNWbsYc9ZA-puRXN4wa+caM714xnzAu5103g@mail.gmail.com>

On Tue, Nov 27, 2012 at 8:13 PM, Stefano Sabatini <stefasab at gmail.com>wrote:

> On date Tuesday 2012-11-27 17:26:47 -0200, Wagner Patriota encoded:
> > Could somebody give me some direction?
> >
> > I am trying to use all the latest ffmpeg commands...
> > should I open it the same way I open a video? or there's some shortcut?
>
> Check how I did it in libavfilter/lavfutils.c:ff_load_image()
>
> I don't know if we should provide some generic utilities somewhere
> (maybe libavformat?).
>

exactly what I need! I will try... thank you!!! :-)


> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121127/fb7eb4f3/attachment.html>

From nkipe at tatapowersed.com Wed Nov 28 12:25:37 2012
From: nkipe at tatapowersed.com (Navin)
Date: Wed, 28 Nov 2012 16:55:37 +0530
Subject: [Libav-user] Is a buffer required while streaming using ffmpeg code?
Message-ID: <50B5F4B1.3060005@tatapowersed.com>

As per the ffmpeg tutorial code, for code like this:

while(av_read_frame_proc(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index == videoStream)
    {
        avcodec_decode_video2_proc(pCodecCtx, pFrame, &frameFinished, &packet);// Decode video frame
        if (frameFinished)
        {
            sws_scale_proc(sws_ctx, (uint8_t const * const *)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data,
                pFrameRGB->linesize );

As a general question, if I'm receiving realtime webcam frames or from a
remote video file from a very fast network and I receive frames faster
than the framerate of the video, I'd need to buffer the frames in my own
datastructure. But if my datastructure is small, then won't I lose a lot
of frames that didn't get buffered? I'm asking because I read somewhere
that ffmpeg buffers a video internally. How does this buffering happen?
How would that buffering be any different than if I implemented my own
buffer? The chances that I'd lose frames are very real, aren't they? Any
chance I could read up or see some source code about this?
Or does all this depend on the streaming protocol being used?

--
Navin

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/9c7b8efd/attachment.html>

From serdel at toya.net.pl Wed Nov 28 13:27:46 2012
From: serdel at toya.net.pl (serdel)
Date: Wed, 28 Nov 2012 13:27:46 +0100
Subject: [Libav-user] =?utf-8?q?avcodec=5Fdecode=5Fvideo2_strange_behaviou?=
=?utf-8?q?r_on_lost_frame?=
Message-ID: <afa7a92816a8d2bfdd1975dec32e733f@toya.net.pl>

Hi,

I am using ffmpeg to decode an rtsp (live555) stream. I decode each
frame separately in a standard sequence. The normal sequence is SPS and
PPS, then NAL_IDR_SLICE and finally NAL_SLICE until the next SPS&PPS. During
normal operation, avcodec_decode_video2 returns a positive number of bytes
used and sets got_picture_ptr to a positive value. On SPS&PPS the return
value is -1 and got_picture_ptr is 0. This is the normal sequence and it
works perfectly without any errors or lags. The problem appears when some
packet is lost in frame transmission (WiFi transmission with some noise
around). Live555 rejects the rest of the packets for this frame, so the
entire frame is lost. In almost all cases it is the NAL_IDR_SLICE frame,
since it is usually about 20 times bigger than the NAL_SLICE frames. When
the next NAL_SLICE frame then arrives complete, avcodec_decode_video2
returns a positive number of used bytes, but got_picture_ptr is 0 - not the
normal behaviour. The worse thing is that this continues to happen for every
following NAL_SLICE frame and for the next NAL_IDR_SLICE with its following
NAL_SLICE frames. This keeps going for a couple of 'full sequences'
(SPS&PPS, NAL_IDR_SLICE and following NAL_SLICE) and finally, on some frame,
it jumps back to normal operation (the avcodec_decode_video2 return value is
positive and got_picture_ptr is also a non-zero positive). When this occurs
the screen picture freezes for 2-3 seconds and then continues running with
no jump-over! You don't see any jump in the picture which would indicate
some lost frames - no, the picture continues from where it froze and is
simply delayed by those 2-3 seconds relative to the sender stream.

I tried detecting this situation and, when it occurs, running
avcodec_flush_buffers() followed by avcodec_decode_video2 with a dummy
empty AVPacket to somehow reset the codec state, but it didn't help. Does
anyone have any other suggestions?

From mark.kenna at sureviewsystems.com Wed Nov 28 16:13:22 2012
From: mark.kenna at sureviewsystems.com (Mark Kenna)
Date: Wed, 28 Nov 2012 15:13:22 +0000
Subject: [Libav-user] YV12
Message-ID: <CAAxE61g8agsqisFtvP8s2ORfV4sMhWOf6yRvhXUhUnGECLi88w@mail.gmail.com>

Hi Guys

Is there an inbuilt method of converting YV12 into YUV420p using the
scaler? I understand the U/V components are swapped but if there's
something built in I'd rather use that.

Thanks,
Mark.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/efa3d0bd/attachment.html>

From cehoyos at ag.or.at Wed Nov 28 16:36:01 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Wed, 28 Nov 2012 15:36:01 +0000 (UTC)
Subject: [Libav-user] YV12
References: <CAAxE61g8agsqisFtvP8s2ORfV4sMhWOf6yRvhXUhUnGECLi88w@mail.gmail.com>
Message-ID: <loom.20121128T163441-158@post.gmane.org>

Mark Kenna <mark.kenna at ...> writes:

> Is there an inbuilt method of converting YV12 into
> YUV420p using the scaler?

The scaler does not support it but it may be sufficient to
set the codec_tag correctly before decoding (difficult to
say without more information about your use case).

There is also a swapuv filter iirc.
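For illustration, one manual route (a sketch only, separate from the codec_tag and swapuv suggestions above): since YV12 and YUV420P differ only in the order of the chroma planes, swapping the U and V plane pointers on the frame is usually enough:

#include <libavcodec/avcodec.h>

/* YV12 and YUV420P carry identical planes; only U and V are stored in the
 * opposite order, so exchanging the chroma pointers (and their strides)
 * converts one into the other without copying any pixels. */
static void swap_uv_planes(AVFrame *frame)
{
    uint8_t *data     = frame->data[1];
    int      linesize = frame->linesize[1];

    frame->data[1]     = frame->data[2];
    frame->linesize[1] = frame->linesize[2];
    frame->data[2]     = data;
    frame->linesize[2] = linesize;
}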

Carl Eugen


From alexcohn at netvision.net.il Wed Nov 28 17:48:45 2012
From: alexcohn at netvision.net.il (Alex Cohn)
Date: Wed, 28 Nov 2012 18:48:45 +0200
Subject: [Libav-user] avcodec_decode_video2 strange behaviour on lost
frame
In-Reply-To: <afa7a92816a8d2bfdd1975dec32e733f@toya.net.pl>
References: <afa7a92816a8d2bfdd1975dec32e733f@toya.net.pl>
Message-ID: <CADiKgohGnaDrTej-kZsn8kkyiUMspiiuQ2ET3JuVcZB2rzNMFg@mail.gmail.com>

On Wed, Nov 28, 2012 at 2:27 PM, serdel <serdel at toya.net.pl> wrote:

> Hi,
>
> I am using ffmpeg to decode an rtps (live555) streaming. I decode each
> frame separately in a standard sequence. The normal sequence is SPS and
> PPS, then NAL_IDR_SLICE and finaly NAL_SLICE until next SPS&PPS. On normal
> work, avcodec_decode_video2 returns positive value of bytes used and sets
> the got_picture_ptr to the positive value. On SPS&PPS the return value is
> -1 and got_picture_ptr is 0. This is a normal sequence and works perfectly
> without any errors or lags. The problem appears when some packet is lost in
> frame transmission (WiFi transmission with some noise around). Live555
> rejects the rest of the packets for this frame so the entire frame is lost.
> In almost all of the cases it is the NAL_IDR_SLICE frame since it is
> usually about 20 times bigger than NAL_SLICE frames. So when the next
> NAL_SLICE frame arrives complete the avcodec_decode_video2 returns positive
> value of used bytes, but the got_picture_ptr is 0 - not a normal behaviour.
> The worse thing is that this continues to happen to every next NAL_SLICE
> frame and to the next NAL_IDR_SLICE with it's following NAL_SLICE frames.
> This keeps on going a couple of 'full seqences' (SPS&PPS,NAL_IDR_SLIC and
> following NAL_SLICE) and finally on some of the frames jumps bakc to normal
> work (avcodec_decode_video2 return is positive and got_picture_ptr is also
> non-zero positive). On this occur the screen picture freezes for 2-3
> seconds and continuous running with no jump_over! You don't see any jump in
> the screen which would indicate some lost frames - no, the picture keeps on
> from where it froze and is just delayed to the sender stream by those 2-3
> seconds.
>
> I tried detecting this situation and when it occurs I run
> avcodec_flush_buffers() followed by avcodec_decode_video2 with a dummy
> empty AVPacket to somehow reset the coded state but it didn't help. Does
> any one have any other suggestions?
>

It looks as if the decoder expects an I-frame after SPS&PPS, so it refuses
to accept P-frames. This does not readily explain your finding that it
takes a couple of "full sequences", but it is at least a start.

The remedy, then, would be to delay feeding SPS&PPS to the decoder until you have
a complete I-frame from RTSP. If LIVE555 threw that frame away, then
throw away the SPS&PPS as well and proceed with the following frames.

Note that an H264 stream is not error-resilient, and you can see all kinds of
artifacts when some frames are missing. Things can become especially nasty
if the stream uses B-frames (not present with the 'ultrafast' preset). But at least this
way you will hopefully not have the movie freeze for a few seconds.
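A rough sketch of that gating (hypothetical helper; as a small simplification, parameter sets are passed straight through, since the thread shows the decoder tolerates them on their own, and inter slices are dropped until a complete IDR has arrived):

/* NAL unit types from the H.264 spec. */
typedef enum { NAL_SLICE = 1, NAL_IDR_SLICE = 5, NAL_SPS = 7, NAL_PPS = 8 } NalType;

static int waiting_for_idr = 1;            /* start in "need a key frame" state */

/* is_complete would come from the live555 callback that reports truncation. */
static int should_feed_decoder(NalType type, int is_complete)
{
    if (!is_complete) {                    /* live555 truncated this frame */
        waiting_for_idr = 1;
        return 0;
    }
    switch (type) {
    case NAL_SPS:
    case NAL_PPS:
        return 1;                          /* parameter sets are harmless alone */
    case NAL_IDR_SLICE:
        waiting_for_idr = 0;               /* clean random access point */
        return 1;
    case NAL_SLICE:
        return !waiting_for_idr;           /* drop P slices until the next IDR */
    }
    return 0;
}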

Sincerely,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/f379e717/attachment.html>

From igorm at stats.com Wed Nov 28 11:23:51 2012
From: igorm at stats.com (Morduhaev, Igor (igorm@stats.com))
Date: Wed, 28 Nov 2012 10:23:51 +0000
Subject: [Libav-user] Help in understanding PTS and DTS
Message-ID: <F43D3702A40CBB40A1902FC1AD89990931F3BB@SEAVER.stats.local>

I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem turned out to be the PTS and DTS values, so lines 12-15 were added before the av_interleaved_write_frame function:


1. AVFormatContext* outContainer = NULL;

2. avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4");

3. AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);

4. AVStream *outStream = avformat_new_stream(outContainer, encoder);

5. // outStream->codec initiation

6. // ...

7. avformat_write_header(outContainer, NULL);


8. // reading and decoding packet

9. // ...

10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame);

11.

12. if (encodedPacket.pts != AV_NOPTS_VALUE)

13.     encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base);

14. if (encodedPacket.dts != AV_NOPTS_VALUE)

15.     encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base);

16.

17. av_interleaved_write_frame(outContainer, &encodedPacket);


After reading many posts I still do not understand:
1. outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The first one was set by me, but I cannot figure out why, and who set 12800? I noticed that before line (7) outStream->time_base = 1/90000, and right after it it changes to 1/12800. Why?
When I transcode from avi to avi, i.e. change line (2) to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi");, then both before and after line (7) outStream->time_base always remains 1/25, unlike in the mp4 case. Why?
2. What is the difference between the time_base of outStream->codec and that of outStream?
3. To calculate the pts, av_rescale_q takes the two time_bases, cross-multiplies their fractions and then computes the pts. Why does it do it this way? As I debugged it, encodedPacket.pts already increments by 1, so why change it if it already has a value?
4. At the beginning the dts value is -2, and after each rescaling it still has a negative value, but despite this the video plays correctly! Shouldn't it be positive?

Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/d6e71757/attachment.html>

From igorm at stats.com Wed Nov 28 14:27:39 2012
From: igorm at stats.com (Morduhaev, Igor (igorm@stats.com))
Date: Wed, 28 Nov 2012 13:27:39 +0000
Subject: [Libav-user] Why container and codec has different time base?
Message-ID: <F43D3702A40CBB40A1902FC1AD89990931FBDB@SEAVER.stats.local>

I used ffmpeg to get video info. The output is
Duration: 00:05:57.00, start: 0.000000, bitrate: 611 kb/s Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 808x610, 609 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
The time base is used to somehow (this is also another of my questions) calculate when to decode and show the frame, right? So whose time base is used, the container's (12800) or the codec's (50)?
The other question is why tbn=12800 and not 90000?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/1f787a9c/attachment.html>

From noahzig at gmail.com Wed Nov 28 15:09:54 2012
From: noahzig at gmail.com (Noah Zigas)
Date: Wed, 28 Nov 2012 09:09:54 -0500
Subject: [Libav-user] Frame Rate Oddities
Message-ID: <CAM8Sd2ZEbdHw38S7hOMp+9avm9iiUYZ-vw48_34Q34R6gG+1jQ@mail.gmail.com>

Hi All,

I'm new to FFmpeg programming and I'm having a strange issue with H264 file
framerates.

I'm simply trying to capture video from an NTSC camera using a Matrox
capture card. I wrote a simple application to capture 10 seconds of video
to an MP4 file. The program works, sort of. I get an MP4 file but it
plays very quickly. If I look at the details of the file the framerate is
somewhere in the thousands (it changes based on the number of frames I
capture, which is also strange).

I'm using Win7/64 with FFmpeg compiled with MinGW64. I've tried the
pre-built binaries from zeranoe with the same results.

Here's the code:

#include <mil.h>
#include <iostream>
extern "C"
{
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}

using namespace std;

const unsigned long RGB_BUFFER_SIZE = 640 * 480 * 3;

MIL_ID mil_buffers[10];
unsigned char *p_raw_buffer;
SwsContext *p_sws_context;
AVFrame *p_out_frame;
AVFormatContext *p_format_context;
AVStream *p_output_stream;

static MIL_INT MFTYPE hook_handler(MIL_INT hook_type, MIL_ID event_id, void
MPTYPE *p_user_data)
{
static unsigned long count = 0;

AVPicture picture_in;
AVPacket pkt;
int got_packet_ptr;

// get frame into BGR buffer
MIL_ID current_buf;
MdigGetHookInfo(event_id, M_MODIFIED_BUFFER+M_BUFFER_ID, &current_buf);
MbufGetColor(current_buf, M_PACKED+M_BGR24, M_ALL_BANDS, p_raw_buffer);

// fill the input picture with the BGR data and convert it to YUV420P
avpicture_fill(&picture_in, p_raw_buffer, PIX_FMT_BGR24, 640, 480);
sws_scale(p_sws_context, picture_in.data, picture_in.linesize, 0, 480,
p_out_frame->data, p_out_frame->linesize);

av_init_packet(&pkt);
AVCodecContext *p_codec_context = p_output_stream->codec;
pkt.data = NULL;
pkt.size = 0;
p_out_frame->pts = count;
int result = avcodec_encode_video2(p_codec_context, &pkt, p_out_frame,
&got_packet_ptr);
if (result < 0)
{
cout << "Error encoding frame." << endl;
}
if (got_packet_ptr)
{
av_interleaved_write_frame(p_format_context, &pkt);
}

++count;

return M_NULL;
} // hook_handler

int
main(int argc, char *argv[])
{
// MIL setup
MIL_ID mil_app = MappAlloc(M_DEFAULT, M_NULL);
MIL_ID mil_sys = MsysAlloc(M_SYSTEM_MORPHIS, M_DEFAULT, M_DEFAULT,
M_NULL);
MIL_ID mil_dig = MdigAlloc(mil_sys, M_DEFAULT, MIL_TEXT("M_NTSC"),
M_DEFAULT, M_NULL);
for (int x = 0; x < 10; ++x)
{
MbufAllocColor(mil_sys, 3, 640, 480, M_UNSIGNED+8, M_IMAGE+M_GRAB,
&mil_buffers[x]);
}
p_raw_buffer = new unsigned char[RGB_BUFFER_SIZE];
MdigChannel(mil_dig, M_CH4);

// FFmpeg setup
av_register_all();

// get the scaling context to transform input image to YUV420P
p_sws_context = sws_getContext(640, 480, PIX_FMT_BGR24, 640, 480,
PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

// allocate the output frame
p_out_frame = avcodec_alloc_frame();
p_out_frame->format = PIX_FMT_YUV420P;
p_out_frame->width = 640;
p_out_frame->height = 480;
av_image_alloc(p_out_frame->data, p_out_frame->linesize, 640, 480,
PIX_FMT_YUV420P, 1);
p_out_frame->pts = 0;

// allocate the output format context
avformat_alloc_output_context2(&p_format_context, NULL, NULL, "test.mp4");

AVOutputFormat *p_output_format = p_format_context->oformat;

AVCodec *p_codec = avcodec_find_encoder(CODEC_ID_H264);

// format the output video stream
p_output_stream = avformat_new_stream(p_format_context, p_codec);

AVCodecContext *p_codec_context = p_output_stream->codec;

// set up the codec context
avcodec_get_context_defaults3(p_codec_context, p_codec);
p_codec_context->codec_id = CODEC_ID_H264;
p_codec_context->bit_rate = 400000;
p_codec_context->width = 640;
p_codec_context->height = 480;
p_codec_context->qmin = 10;
p_codec_context->qmax = 42;
p_codec_context->time_base.den = 30;
p_codec_context->time_base.num = 1;
p_codec_context->gop_size = 10;
p_codec_context->pix_fmt = PIX_FMT_YUV420P;
p_codec_context->ticks_per_frame = 2;
p_codec_context->flags |= CODEC_FLAG_GLOBAL_HEADER;

// open the codec context
avcodec_open2(p_codec_context, p_codec, NULL);

// open the io context
avio_open2(&p_format_context->pb, "test.mp4", AVIO_FLAG_WRITE, NULL,
NULL);

av_dump_format(p_format_context, 0, "test.mp4", 1);

// write the file header
avformat_write_header(p_format_context, NULL);

// process 300 frames (10 seconds of video)
cout << "Starting processing..." << endl << endl;
MdigProcess(mil_dig, mil_buffers, 9, M_SEQUENCE+M_COUNT(300), M_DEFAULT,
hook_handler, M_NULL);
cout << endl << "Processing done." << endl << "Press any key to
continue...";

// flush the (possibly) buffered frames to the output file
int got_packet_ptr;
do
{
AVPacket pkt;
pkt.data = NULL;
pkt.size = 0;
int result = avcodec_encode_video2(p_codec_context, &pkt, NULL,
&got_packet_ptr);
if (result < 0)
{
cout << "Error encoding frame." << endl;
}
if (got_packet_ptr)
{
av_interleaved_write_frame(p_format_context, &pkt);
av_free_packet(&pkt);
}
} while (got_packet_ptr);

// write the file trailer
av_write_trailer(p_format_context);

// clean up
avformat_free_context(p_format_context);
av_freep(&p_out_frame->data[0]);
av_free(p_out_frame);
sws_freeContext(p_sws_context);

delete [] p_raw_buffer;
for (int x = 0; x < 10; ++x)
{
MbufFree(mil_buffers[x]);
}
MdigFree(mil_dig);
MsysFree(mil_sys);
MappFree(mil_app);

getchar();
return 0;
} // main

Any help would be greatly appreciated.

Thanks,
Noah
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/2b83aec5/attachment.html>

From serdel at toya.net.pl Wed Nov 28 19:20:34 2012
From: serdel at toya.net.pl (serdel)
Date: Wed, 28 Nov 2012 19:20:34 +0100
Subject: [Libav-user]
=?utf-8?q?avcodec=5Fdecode=5Fvideo2_strange_behaviou?=
=?utf-8?q?r_on_lost_frame?=
In-Reply-To: <CADiKgohGnaDrTej-kZsn8kkyiUMspiiuQ2ET3JuVcZB2rzNMFg@mail.gmail.com>
References: <afa7a92816a8d2bfdd1975dec32e733f@toya.net.pl>
<CADiKgohGnaDrTej-kZsn8kkyiUMspiiuQ2ET3JuVcZB2rzNMFg@mail.gmail.com>
Message-ID: <330bb4b91f32ed2a0017a19b6d4f1399@toya.net.pl>

I implemented a simple algorithm which keeps track of the previous frame and the
current one and checks whether the sequence is correct. If it isn't, I don't
pass the frame to avcodec_decode_video2 and stay that way
until the next sequence starts. It brought the desired results until one
unlucky moment where there were no missing packets in live555, but the main
frame still got decoded incorrectly (positive return value and 0 as
got_picture_ptr), and then the same situation happened all over again. Why doesn't
flushing the buffers help here? I am now thinking about reinitializing the
whole ffmpeg codec on that occurrence, but it seems brutal...


On Wed, 28 Nov 2012 18:48:45 +0200, Alex Cohn <alexcohn at netvision.net.il>
wrote:
> On Wed, Nov 28, 2012 at 2:27 PM, serdel <serdel at toya.net.pl> wrote:
>
>> Hi,
>>
>> I am using ffmpeg to decode an rtps (live555) streaming. I decode each
>> frame separately in a standard sequence. The normal sequence is SPS and
>> PPS, then NAL_IDR_SLICE and finaly NAL_SLICE until next SPS&PPS. On
>> normal
>> work, avcodec_decode_video2 returns positive value of bytes used and
sets
>> the got_picture_ptr to the positive value. On SPS&PPS the return value
is
>> -1 and got_picture_ptr is 0. This is a normal sequence and works
>> perfectly
>> without any errors or lags. The problem appears when some packet is lost
>> in
>> frame transmission (WiFi transmission with some noise around). Live555
>> rejects the rest of the packets for this frame so the entire frame is
>> lost.
>> In almost all of the cases it is the NAL_IDR_SLICE frame since it is
>> usually about 20 times bigger than NAL_SLICE frames. So when the next
>> NAL_SLICE frame arrives complete the avcodec_decode_video2 returns
>> positive
>> value of used bytes, but the got_picture_ptr is 0 - not a normal
>> behaviour.
>> The worse thing is that this continues to happen to every next NAL_SLICE
>> frame and to the next NAL_IDR_SLICE with it's following NAL_SLICE
frames.
>> This keeps on going a couple of 'full seqences' (SPS&PPS,NAL_IDR_SLIC
and
>> following NAL_SLICE) and finally on some of the frames jumps bakc to
>> normal
>> work (avcodec_decode_video2 return is positive and got_picture_ptr is
>> also
>> non-zero positive). On this occur the screen picture freezes for 2-3
>> seconds and continuous running with no jump_over! You don't see any jump
>> in
>> the screen which would indicate some lost frames - no, the picture keeps
>> on
>> from where it froze and is just delayed to the sender stream by those
2-3
>> seconds.
>>
>> I tried detecting this situation and when it occurs I run
>> avcodec_flush_buffers() followed by avcodec_decode_video2 with a dummy
>> empty AVPacket to somehow reset the coded state but it didn't help. Does
>> any one have any other suggestions?
>>
>
> It looks as if the decoder expects an I-frame after SPS&PPS, so it
refuses
> to accept P-frames. This does not readily explain your finding that it
> takes a couple of "full sequences", but this is at least some start.
>
> The remedy then would be to delay feeding SPS&PPS to decoder until you
have
> a complete I-frame from RTPS. If LIVE555 throwed this frame away, then
> throw away SPS&PPS and proceed with the following frames.
>
> Note that H264 stream is not error-resilient, and you can see all kinds
of
> artifacts when some frames are missing. Things can become especially
nasty
> if the stream uses B-frames (not in 'Ultrafast' profile). But at least
this
> way you will hopefully not have the movie freeze for few seconds.
>
> Sincerely,
> Alex

From alexcohn at netvision.net.il Wed Nov 28 20:15:07 2012
From: alexcohn at netvision.net.il (Alex Cohn)
Date: Wed, 28 Nov 2012 21:15:07 +0200
Subject: [Libav-user] avcodec_decode_video2 strange behaviour on lost
frame
In-Reply-To: <330bb4b91f32ed2a0017a19b6d4f1399@toya.net.pl>
References: <afa7a92816a8d2bfdd1975dec32e733f@toya.net.pl>
<CADiKgohGnaDrTej-kZsn8kkyiUMspiiuQ2ET3JuVcZB2rzNMFg@mail.gmail.com>
<330bb4b91f32ed2a0017a19b6d4f1399@toya.net.pl>
Message-ID: <CADiKgoi9SchXHz1qQ4qx7c+-Vp+g6LuPv7FkWRZCVK3o4=4ShA@mail.gmail.com>

On 28 Nov 2012 20:21, "serdel" <serdel at toya.net.pl> wrote:
>
> I implemented a simple algorithm which kept track of previous frame and
the
> current one and checked if the sequence is correct. If it wasn't I didn't
> put the frame to the avcodec_decode_video2 function and stayed that way
> until next sequence has started. It brought the desired results until one
> unlucky moment where there was no missing packets in live555, but the main
> frame still got decoded incorrectly (positive return value and 0 as
> got_picture_ptr) and the same situation all over again. Why doesn't
> flushing the buffer help here? I am now thinking about reinitializing the
> whole ffmpeg on that occurrence, but it seems brutal...
>
>
> On Wed, 28 Nov 2012 18:48:45 +0200, Alex Cohn <alexcohn at netvision.net.il>
> wrote:
> > On Wed, Nov 28, 2012 at 2:27 PM, serdel <serdel at toya.net.pl> wrote:
> >
> >> Hi,
> >>
> >> I am using ffmpeg to decode an rtps (live555) streaming. I decode each
> >> frame separately in a standard sequence. The normal sequence is SPS and
> >> PPS, then NAL_IDR_SLICE and finaly NAL_SLICE until next SPS&PPS. On
> >> normal
> >> work, avcodec_decode_video2 returns positive value of bytes used and
> sets
> >> the got_picture_ptr to the positive value. On SPS&PPS the return value
> is
> >> -1 and got_picture_ptr is 0. This is a normal sequence and works
> >> perfectly
> >> without any errors or lags. The problem appears when some packet is
lost
> >> in
> >> frame transmission (WiFi transmission with some noise around). Live555
> >> rejects the rest of the packets for this frame so the entire frame is
> >> lost.
> >> In almost all of the cases it is the NAL_IDR_SLICE frame since it is
> >> usually about 20 times bigger than NAL_SLICE frames. So when the next
> >> NAL_SLICE frame arrives complete the avcodec_decode_video2 returns
> >> positive
> >> value of used bytes, but the got_picture_ptr is 0 - not a normal
> >> behaviour.
> >> The worse thing is that this continues to happen to every next
NAL_SLICE
> >> frame and to the next NAL_IDR_SLICE with it's following NAL_SLICE
> frames.
> >> This keeps on going a couple of 'full seqences' (SPS&PPS,NAL_IDR_SLIC
> and
> >> following NAL_SLICE) and finally on some of the frames jumps bakc to
> >> normal
> >> work (avcodec_decode_video2 return is positive and got_picture_ptr is
> >> also
> >> non-zero positive). On this occur the screen picture freezes for 2-3
> >> seconds and continuous running with no jump_over! You don't see any
jump
> >> in
> >> the screen which would indicate some lost frames - no, the picture
keeps
> >> on
> >> from where it froze and is just delayed to the sender stream by those
> 2-3
> >> seconds.
> >>
> >> I tried detecting this situation and when it occurs I run
> >> avcodec_flush_buffers() followed by avcodec_decode_video2 with a dummy
> >> empty AVPacket to somehow reset the coded state but it didn't help.
Does
> >> any one have any other suggestions?
> >>
> >
> > It looks as if the decoder expects an I-frame after SPS&PPS, so it
> refuses
> > to accept P-frames. This does not readily explain your finding that it
> > takes a couple of "full sequences", but this is at least some start.
> >
> > The remedy then would be to delay feeding SPS&PPS to decoder until you
> have
> > a complete I-frame from RTPS. If LIVE555 throwed this frame away, then
> > throw away SPS&PPS and proceed with the following frames.
> >
> > Note that H264 stream is not error-resilient, and you can see all kinds
> of
> > artifacts when some frames are missing. Things can become especially
> nasty
> > if the stream uses B-frames (not in 'Ultrafast' profile). But at least
> this
> > way you will hopefully not have the movie freeze for few seconds.
> >
> > Sincerely,
> > Alex
>

So, you sometimes receive frames that fail to decode. How often does it
happen? And what is your GOP length and FPS?

BR,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121128/703c82c4/attachment.html>

From adrozdoff at gmail.com Thu Nov 29 00:04:17 2012
From: adrozdoff at gmail.com (hatred)
Date: Thu, 29 Nov 2012 10:04:17 +1100
Subject: [Libav-user] Why container and codec has different time base?
In-Reply-To: <F43D3702A40CBB40A1902FC1AD89990931FBDB@SEAVER.stats.local>
References: <F43D3702A40CBB40A1902FC1AD89990931FBDB@SEAVER.stats.local>
Message-ID: <CAP9VioeXH-PCLo2CkLcN9OaCqwW=cioFAWhbjLr08KXcQQRi8A@mail.gmail.com>

Hi Igor,

As far as I know, some formats require specific time bases (MPEG-TS streams,
for example), so you cannot change those. But you can control the
CodecContext and set the time base that you need. For example, if the FPS is
25, you can set the coder time base to 1/25 and simply increment the PTS by 1
for every frame.
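As a small sketch of that convention (the helper names here are only illustrative):

#include <libavformat/avformat.h>

/* For fixed-fps content set the *codec* time base to 1/fps and advance the
 * frame PTS by exactly one per frame; the *stream* time base is then chosen
 * by the muxer in avformat_write_header() (1/12800 for mp4 in this thread). */
static void setup_encoder_timebase(AVCodecContext *enc, int fps)
{
    enc->time_base.num = 1;
    enc->time_base.den = fps;               /* e.g. 1/25 */
}

static void stamp_frame(AVFrame *frame, int64_t frame_index)
{
    frame->pts = frame_index;               /* counted in enc->time_base units */
}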

If you know Russian, you can read my note about time bases and PTS:
http://htrd.su/wiki/zhurnal/2012/11/23/ffmpeg_nemnogo_pro_time-base

Briefly:
- AVPacket PTS and DTS fields read from a stream are in the stream time base.
- The AVFrame PTS field decoded from a packet is in the coder time base.

The same applies to encoding.

This is my understanding after reading the ffmpeg docs and code, so more
detailed information from the ffmpeg development team is welcome.

2012/11/29 Morduhaev, Igor (igorm at stats.com) <igorm at stats.com>

> I used ffmpeg to get video info. The output is
>
> Duration: 00:05:57.00, start: 0.000000, bitrate: 611 kb/s Stream
> #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 808x610, 609
> kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
>
> The time base is used to somehow(this is also my another question)
> calculate when to decode and show the frame, right? So whose time base is
> used, container (12800) or codec (50)?
>
> The another question is why tbn=12800 and not 90000?
>
>
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121129/952c05ad/attachment.html>

From adrozdoff at gmail.com Thu Nov 29 00:23:52 2012
From: adrozdoff at gmail.com (hatred)
Date: Thu, 29 Nov 2012 10:23:52 +1100
Subject: [Libav-user] Help in understanding PTS and DTS
In-Reply-To: <F43D3702A40CBB40A1902FC1AD89990931F3BB@SEAVER.stats.local>
References: <F43D3702A40CBB40A1902FC1AD89990931F3BB@SEAVER.stats.local>
Message-ID: <CAP9Vioeib4vWdT672gpw=9G0tyUR+LERE2dLKQOeoFAO=LAB6Q@mail.gmail.com>

Hi Igor, my comments below

2012/11/28 Morduhaev, Igor (igorm at stats.com) <igorm at stats.com>

> I had fps issues when transcoding from avi to mp4(x264). Eventually the
> problem was in PTS and DTS values, so lines 12-15 where added before
> av_interleaved_write_frame function:
>
>
>
> 1. AVFormatContext* outContainer = NULL;
>
> 2. avformat_alloc_output_context2(&outContainer, NULL, "mp4",
> "c:\\test.mp4";
>
> 3. AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
>
> 4. AVStream *outStream = avformat_new_stream(outContainer, encoder);
>
> 5. // outStream->codec initiation
>
> 6. // ...
>
> 7. avformat_write_header(outContainer, NULL);
>
>
>
> 8. // reading and decoding packet
>
> 9. // ...
>
> 10.avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame,
> &got_frame)
>
> 11.
>
> 12.if (encodedPacket.pts != AV_NOPTS_VALUE)
>
> 13.encodedPacket.pts = av_rescale_q(encodedPacket.pts,
> outStream->codec->time_base, outStream->time_base);
>
> 14.if (encodedPacket.dts != AV_NOPTS_VALUE)
>
> 15.encodedPacket.dts = av_rescale_q(encodedPacket.dts,
> outStream->codec->time_base, outStream->time_base);
>
> 16.
>
> 17.av_interleaved_write_frame(outContainer, &encodedPacket)
>
>
>
> After reading many posts I still do not understand:
>
> 1. outStream->codec->time_base = 1/25 and outStream->time_base =
> 1/12800. The 1st one was set by me but I cannot figure out why and who set
> 12800? I noticed that before line (7) outStream->time_base = 1/90000 and
> right after it it changes to 1/12800, why?
>

Read docs:
1. AVStream::time_base:
This is the fundamental unit of time (in seconds) in terms
of which frame timestamps are represented.

decoding: set by libavformat
encoding: set by libavformat in av_write_header. The muxer may use the
user-provided value of @ref AVCodecContext.time_base "codec->time_base"
as a hint.

So for an AVStream you should not set the time base - it will be set automatically
by the avformat_write_header() call.

2. AVCodecContext::time_base:
This is the fundamental unit of time (in seconds) in terms
of which frame timestamps are represented. For fixed-fps content,
timebase should be 1/framerate and timestamp increments should be
identically 1.
- encoding: MUST be set by user.
- decoding: Set by libavcodec.

So for the AVCodecContext you MUST set the time base for encoding.

When I transcode from avi to avi, meaning changing the line (2) to
avformat_alloc_output_context2(&outContainer,
> NULL, "avi", "c:\\test.avi"; , so before and after line (7)
> outStream->time_base remains always 1/25 and not like in mp4 case, why?
>

see above

2. What is the difference between time_base of outStream->codec and
> outStream?
>

see above


> 3. To calc the pts av_rescale_q does: takes 2 time_base, multiplies
> their fractions in cross and then compute the pts. Why it does this in this
> way? As I debugged, the encodedPacket.pts has value incremental by 1, so
> why changing it if it does has value?
>

Generally the coder time base is the inverse of the frame rate, so we can simply
increment the PTS by 1 for each next frame, but the stream time base can depend
on format/codec specifications. Packet PTS/DTS must be in stream time-base
units before writing, so rescaling between the coder and stream time bases is
required.
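To make the cross-multiplication concrete with the numbers from this thread (coder time base 1/25, mp4 stream time base 1/12800), here is a rough sketch of the rescale step; the helper name is hypothetical:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* av_rescale_q(pts, 1/25, 1/12800) = pts * 12800 / 25 = pts * 512,
 * i.e. one frame lasts 512 stream ticks at 25 fps in an mp4 stream. */
static void rescale_packet(AVPacket *pkt, AVRational tb_codec, AVRational tb_stream)
{
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q(pkt->pts, tb_codec, tb_stream);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q(pkt->dts, tb_codec, tb_stream);
    if (pkt->duration > 0)
        pkt->duration = (int)av_rescale_q(pkt->duration, tb_codec, tb_stream);
}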


> 4. At the beginning the dts value is -2 and after each rescaling it
> still has negative number, but despite this the video played correctly!
> Shouldn't it be positive?
>

That is strange to me :-)


>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121129/265e9d3f/attachment.html>

From cbhattac at adobe.com Thu Nov 29 04:18:49 2012
From: cbhattac at adobe.com (Chandranath Bhattacharyya)
Date: Thu, 29 Nov 2012 08:48:49 +0530
Subject: [Libav-user] Help in understanding PTS and DTS
In-Reply-To: <F43D3702A40CBB40A1902FC1AD89990931F3BB@SEAVER.stats.local>
References: <F43D3702A40CBB40A1902FC1AD89990931F3BB@SEAVER.stats.local>
Message-ID: <5DA090DC8DCD3549BE9DF08C5074509E0160830959@indiambx01.corp.adobe.com>

>>outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The 1st one was set by me but I cannot figure out why and who set 12800? I noticed that before line (7) outStream->time_base = 1/90000 and right after it it changes to 1/12800, why?

I was testing on Windows and found this problem cropping up in builds newer than 3rd Oct from http://ffmpeg.zeranoe.com/builds/. So I reverted to the 3rd Oct build and outStream->time_base was correct. That was more than a month ago; the latest builds may have fixed this issue.

Regards,
Chandranath

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121129/851b11a4/attachment.html>

From serdel at toya.net.pl Thu Nov 29 09:09:55 2012
From: serdel at toya.net.pl (serdel)
Date: Thu, 29 Nov 2012 09:09:55 +0100
Subject: [Libav-user]
=?utf-8?q?avcodec=5Fdecode=5Fvideo2_strange_behaviou?=
=?utf-8?q?r_on_lost_frame?=
In-Reply-To: <CADiKgoi9SchXHz1qQ4qx7c+-Vp+g6LuPv7FkWRZCVK3o4=4ShA@mail.gmail.com>
References: <afa7a92816a8d2bfdd1975dec32e733f@toya.net.pl>
<CADiKgohGnaDrTej-kZsn8kkyiUMspiiuQ2ET3JuVcZB2rzNMFg@mail.gmail.com>
<330bb4b91f32ed2a0017a19b6d4f1399@toya.net.pl>
<CADiKgoi9SchXHz1qQ4qx7c+-Vp+g6LuPv7FkWRZCVK3o4=4ShA@mail.gmail.com>
Message-ID: <4215b6757f75b9c202f15b3b51cfb0f3@toya.net.pl>

Actually, I still have no idea why the frames get corrupted (at least that is
what I suspect...). When I saw packet loss it was obvious, because losing one
packet is enough for the whole frame to be dropped by live555. But why a frame
does not get decoded when there is no packet loss... that is still a riddle.
However, reinitialising the codec in this unusual situation seems to help, but
I still need more testing here.

On Wed, 28 Nov 2012 21:15:07 +0200, Alex Cohn <alexcohn at netvision.net.il>
wrote:
> On 28 Nov 2012 20:21, "serdel" <serdel at toya.net.pl> wrote:
>>
>> I implemented a simple algorithm which kept track of previous frame and
> the
>> current one and checked if the sequence is correct. If it wasn't I
didn't
>> put the frame to the avcodec_decode_video2 function and stayed that way
>> until next sequence has started. It brought the desired results until
one
>> unlucky moment where there was no missing packets in live555, but the
>> main
>> frame still got decoded incorrectly (positive return value and 0 as
>> got_picture_ptr) and the same situation all over again. Why doesn't
>> flushing the buffer help here? I am now thinking about reinitializing
the
>> whole ffmpeg on that occurrence, but it seems brutal...
>>
>>
>> On Wed, 28 Nov 2012 18:48:45 +0200, Alex Cohn
<alexcohn at netvision.net.il>
>> wrote:
>> > On Wed, Nov 28, 2012 at 2:27 PM, serdel <serdel at toya.net.pl> wrote:
>> >
>> >> Hi,
>> >>
>> >> I am using ffmpeg to decode an rtps (live555) streaming. I decode
each
>> >> frame separately in a standard sequence. The normal sequence is SPS
>> >> and
>> >> PPS, then NAL_IDR_SLICE and finaly NAL_SLICE until next SPS&PPS. On
>> >> normal
>> >> work, avcodec_decode_video2 returns positive value of bytes used and
>> sets
>> >> the got_picture_ptr to the positive value. On SPS&PPS the return
value
>> is
>> >> -1 and got_picture_ptr is 0. This is a normal sequence and works
>> >> perfectly
>> >> without any errors or lags. The problem appears when some packet is
> lost
>> >> in
>> >> frame transmission (WiFi transmission with some noise around).
Live555
>> >> rejects the rest of the packets for this frame so the entire frame is
>> >> lost.
>> >> In almost all of the cases it is the NAL_IDR_SLICE frame since it is
>> >> usually about 20 times bigger than NAL_SLICE frames. So when the next
>> >> NAL_SLICE frame arrives complete the avcodec_decode_video2 returns
>> >> positive
>> >> value of used bytes, but the got_picture_ptr is 0 - not a normal
>> >> behaviour.
>> >> The worse thing is that this continues to happen to every next
> NAL_SLICE
>> >> frame and to the next NAL_IDR_SLICE with it's following NAL_SLICE
>> frames.
>> >> This keeps on going a couple of 'full seqences' (SPS&PPS,NAL_IDR_SLIC
>> and
>> >> following NAL_SLICE) and finally on some of the frames jumps bakc to
>> >> normal
>> >> work (avcodec_decode_video2 return is positive and got_picture_ptr is
>> >> also
>> >> non-zero positive). On this occur the screen picture freezes for 2-3
>> >> seconds and continuous running with no jump_over! You don't see any
> jump
>> >> in
>> >> the screen which would indicate some lost frames - no, the picture
> keeps
>> >> on
>> >> from where it froze and is just delayed to the sender stream by those
>> 2-3
>> >> seconds.
>> >>
>> >> I tried detecting this situation and when it occurs I run
>> >> avcodec_flush_buffers() followed by avcodec_decode_video2 with a
>> >> dummy
>> >> empty AVPacket to somehow reset the coded state but it didn't help.
> Does
>> >> any one have any other suggestions?
>> >>
>> >
>> > It looks as if the decoder expects an I-frame after SPS&PPS, so it
>> refuses
>> > to accept P-frames. This does not readily explain your finding that it
>> > takes a couple of "full sequences", but this is at least some start.
>> >
>> > The remedy then would be to delay feeding SPS&PPS to decoder until you
>> have
>> > a complete I-frame from RTPS. If LIVE555 throwed this frame away, then
>> > throw away SPS&PPS and proceed with the following frames.
>> >
>> > Note that H264 stream is not error-resilient, and you can see all
kinds
>> of
>> > artifacts when some frames are missing. Things can become especially
>> nasty
>> > if the stream uses B-frames (not in 'Ultrafast' profile). But at least
>> this
>> > way you will hopefully not have the movie freeze for few seconds.
>> >
>> > Sincerely,
>> > Alex
>>
>
> So, you sometimes receive frames that fail to decode. How often does it
> happen? And what is your GOP length and FPS?
>
> BR,
> Alex

From rafi.fisher at wywy.com Thu Nov 29 11:53:10 2012
From: rafi.fisher at wywy.com (Rafi Fisher)
Date: Thu, 29 Nov 2012 12:53:10 +0200
Subject: [Libav-user] hello i have some developement question as for the
decoding-encoding process
Message-ID: <CAOr2bacv0drqr_FvXWL46t=Ojc=Y63zxXK9_EzVAzdk9-jDfZg@mail.gmail.com>

I'm trying to convert an mp2 audio file into wav file format,
i.e. decode the mp2 file and then encode it to wav,
using the ffmpeg libraries under Windows and the Visual Studio IDE.

It works fine as long as I don't use a sample rate different from the one
that was used in the source mp2 file.

If I try to encode the destination wav file with a different sample rate than the
source (mp2 file)
and then play the file, it sounds either too quick or too slow.

So I don't know how, for example, to convert from an mp2 sample rate of 16000
to a wav file at 44000.


thanks a lot
rafi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121129/3a0f7742/attachment.html>

From cehoyos at ag.or.at Thu Nov 29 13:24:03 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 29 Nov 2012 12:24:03 +0000 (UTC)
Subject: [Libav-user]
=?utf-8?q?hello_i_have_some_developement_question_as?=
=?utf-8?q?_for_the=09decoding-encoding_process?=
References: <CAOr2bacv0drqr_FvXWL46t=Ojc=Y63zxXK9_EzVAzdk9-jDfZg@mail.gmail.com>
Message-ID: <loom.20121129T132332-801@post.gmane.org>

Rafi Fisher <rafi.fisher at ...> writes:

> so i dont know how for example to convert from mp2
> sample rate = 16000 to wav format 44000.

Use swresample to change audio sample rate.
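For reference, a minimal sketch of setting up such a context with swr_alloc_set_opts() (the 16000 -> 44100 rates follow the question in this thread; the channel layout and sample format are assumptions, not taken from the original post):

#include <libswresample/swresample.h>
#include <libavutil/channel_layout.h>

/* Sketch only: resample 16000 Hz to 44100 Hz, assuming packed stereo s16,
 * which is the typical mp2 decoder output. */
static SwrContext *make_resampler(void)
{
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 44100,  /* output */
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 16000,  /* input  */
            0, NULL);
    if (swr && swr_init(swr) >= 0)
        return swr;
    swr_free(&swr);
    return NULL;
}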

Carl Eugen


From rafi.fisher at wywy.com Thu Nov 29 13:40:39 2012
From: rafi.fisher at wywy.com (Rafi Fisher)
Date: Thu, 29 Nov 2012 14:40:39 +0200
Subject: [Libav-user] hello i have some developement question as for the
decoding-encoding process
In-Reply-To: <loom.20121129T132332-801@post.gmane.org>
References: <CAOr2bacv0drqr_FvXWL46t=Ojc=Y63zxXK9_EzVAzdk9-jDfZg@mail.gmail.com>
<loom.20121129T132332-801@post.gmane.org>
Message-ID: <CAOr2bae0uh1Ka+fjHBYz_1KfG0b6BBOvWPJkK6LHsYeT+njgKA@mail.gmail.com>

Hi Carl,

Thanks for the answer. Where can I find some sample code showing how to use the
swresample APIs?

Thanks, rafi

On Thu, Nov 29, 2012 at 2:24 PM, Carl Eugen Hoyos <cehoyos at ag.or.at> wrote:

> Rafi Fisher <rafi.fisher at ...> writes:
>
> > so i dont know how for example to convert from mp2
> > sample rate = 16000 to wav format 44000.
>
> Use swresample to change audio sample rate.
>
> Carl Eugen
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121129/cba1be41/attachment.html>

From brian at caelumspace.com Thu Nov 29 14:07:16 2012
From: brian at caelumspace.com (Brian Martin)
Date: Thu, 29 Nov 2012 08:07:16 -0500
Subject: [Libav-user] How to connect to an IP camera in libav
Message-ID: <005301cdce32$7705fc20$6511f460$@caelumspace.com>

Hello,



I'm trying to connect to a Foscam MJPEG IP camera (http) using libav on
Windows and am getting an error code of -5 from the following code:



AVFormatContext *pFormatCtx = NULL;
int i, videoStream;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
AVFrame *pFrame;
AVFrame *pFrameRGB;
AVPacket packet;
int frameFinished;
int numBytes;
uint8_t *buffer;
struct SwsContext *img_convert_ctx;
int frameCount = 0;

if (!videoStreamAddress) {
    printf("Please provide a movie file\n");
    return -1;
}

// Register all formats and codecs
av_register_all();
avformat_network_init();

// Open video file
if (avformat_open_input(&pFormatCtx, videoStreamAddress, NULL, NULL) != 0)
{
    printf("Couldn't open file %s\n", videoStreamAddress);
    return -1; // Couldn't open file
}



I tried to get the string value of this error, but I get a write-protected-memory
error when using the built-in av_strerror function.
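For the error text, a minimal sketch of the usual way av_strerror() is called (the helper and buffer names here are only illustrative):

#include <stdio.h>
#include <libavutil/error.h>

/* av_strerror() writes into a caller-owned, writable buffer; passing a
 * string literal or an uninitialised pointer typically produces exactly
 * the kind of access violation described above. */
static void print_av_error(int errnum)
{
    char errbuf[128];   /* writable stack buffer */
    if (av_strerror(errnum, errbuf, sizeof(errbuf)) < 0)
        snprintf(errbuf, sizeof(errbuf), "unknown error code %d", errnum);
    fprintf(stderr, "libav error: %s\n", errbuf);
}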

Any ideas would be appreciated. Thanks!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121129/baad0226/attachment.html>

From cehoyos at ag.or.at Thu Nov 29 21:17:17 2012
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 29 Nov 2012 20:17:17 +0000 (UTC)
Subject: [Libav-user] hello i have some developement question as for the
decoding-encoding process
References: <CAOr2bacv0drqr_FvXWL46t=Ojc=Y63zxXK9_EzVAzdk9-jDfZg@mail.gmail.com>
<loom.20121129T132332-801@post.gmane.org>
<CAOr2bae0uh1Ka+fjHBYz_1KfG0b6BBOvWPJkK6LHsYeT+njgKA@mail.gmail.com>
Message-ID: <loom.20121129T211558-716@post.gmane.org>

Rafi Fisher <rafi.fisher at ...> writes:

> thanks for the answer. where can i find some
> sample code how to use the swresample APIs

See doc/examples/filtering_audio.c

Please do not top-post here, Carl Eugen


From rafi.fisher at wywy.com Fri Nov 30 20:51:18 2012
From: rafi.fisher at wywy.com (Rafi Fisher)
Date: Fri, 30 Nov 2012 21:51:18 +0200
Subject: [Libav-user] hello i need help in using swreample lib.
Message-ID: <CAOr2badVExG3fSus=jaFrPK603yRxNAgkW3CVJjduogUrafqpw@mail.gmail.com>

Hello,

I am trying to transcode audio frames decoded from an mp2 file into a wav file.

So, from what I understand:

My input buffer should be the uint8_t *data field in the AVFrame structure.

My output buffer should be allocated using:

int av_samples_alloc(uint8_t **audio_data, int *linesize, int
nb_channels, int nb_samples, enum AVSampleFormat sample_fmt, int align);

What size should I put in the nb_samples parameter?

Then I understand I should call:

int swr_convert(struct SwrContext *s, uint8_t **out, int
out_count, const uint8_t **in, int in_count);

and pass the AVFrame data field as the input buffer and the allocated
samples buffer as the output buffer.

How do I calculate the in_count and out_count parameters?


Any help will be appreciated. Thanks, rafi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121130/a8e03aa8/attachment.html>

From dejan.crnila at dewesoft.com Fri Nov 30 14:03:54 2012
From: dejan.crnila at dewesoft.com (=?UTF-8?Q?Dejan_=C4=8Crnila?=)
Date: Fri, 30 Nov 2012 14:03:54 +0100
Subject: [Libav-user] Writing raw images to AVI file
Message-ID: <CANx_m_37Q+YOwrDEUjKS=0pReKvFs5_N1bY=vbJz-CfHmpjcdw@mail.gmail.com>

Hi,

I'm developing a plugin for writing raw images to uncompressed AVI. The
problem is that if the file is larger than 1 GB, Windows Media Player cannot
open it, but Media Player Classic can. Additionally, writing stops once the
file size reaches 2 GB and seeking no longer works. The AVI muxer in
FFmpeg is supposed to support OpenDML AVI 2.0, right?

Any ideas?

Source code
----------------------------
#define RET(x) { res = x; goto failed;}

HRESULT __stdcall CFFMPEGVideoStoreEngine::CreateFile(BSTR fileName, int
pixelFormat, int width, int height, double frameRate)
{
DBGPRINT(_T("CFFMPEGVideoStoreEngine::CreateFile called"));
DBGPRINT(_T(" param fileName: %s"), fileName);

HRESULT res;
_bstr_t fName;
fName.Assign(fileName);

if (!g_registered)
{
av_register_all();
g_registered = true;
}

m_lastErrNum = avformat_alloc_output_context2(&m_pFmtCtx, NULL, NULL,
fName);
if (m_lastErrNum < 0)
RET(FFMPEG_E_NOOUTPUTFORMAT);

AVOutputFormat* pOutFmt = m_pFmtCtx->oformat;

AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_RAWVIDEO);
m_pStream = avformat_new_stream(m_pFmtCtx, codec);
if (!m_pStream)
RET(FFMPEG_E_CREATESTREAMFAILED);
m_pCodCtx = m_pStream->codec;

avcodec_get_context_defaults3(m_pCodCtx, codec);

m_pCodCtx->width = width;
m_pCodCtx->height = height;
m_pCodCtx->time_base = av_inv_q(av_d2q(frameRate, 1000));
m_pCodCtx->pix_fmt = (AVPixelFormat) pixelFormat;
if (pOutFmt->flags & AVFMT_GLOBALHEADER)
m_pCodCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

m_lastErrNum = avcodec_open2(m_pCodCtx, codec, NULL);
if (m_lastErrNum < 0)
RET(FFMPEG_E_OPENCODECFAILED);

m_pFrame = avcodec_alloc_frame();
if (!m_pFrame)
RET(FFMPEG_E_ALLOCFRAMEFAILED);

int picSize = avpicture_get_size(m_pCodCtx->pix_fmt, m_pCodCtx->width,
m_pCodCtx->height);

m_pFrame->format = m_pCodCtx->pix_fmt;
m_pFrame->width = m_pCodCtx->width;
m_pFrame->height = m_pCodCtx->height;

if (!(pOutFmt->flags & AVFMT_NOFILE))
{
m_lastErrNum = avio_open(&m_pFmtCtx->pb, fName, AVIO_FLAG_WRITE);
if (m_lastErrNum < 0)
RET(FFMPEG_E_CREATEFILEFAILED);
}

m_lastErrNum = avformat_write_header(m_pFmtCtx, NULL);
if (m_lastErrNum < 0)
{
avio_close(m_pFmtCtx->pb);
RET(FFMPEG_E_WRITEFILEFAILED);
}

DBGPRINT(_T("CFFMPEGVideoLoadEngine::CreateFile succeeded"));
return S_OK;

failed:
DBGPRINT(_T("CFFMPEGVideoLoadEngine::OpenFile failed (exit code %x)"), res);

FreeResources();

return res;
}

HRESULT __stdcall CFFMPEGVideoStoreEngine::WriteFrame(void* pData, int len)
{
DBGPRINT(_T("CFFMPEGVideoStoreEngine::WriteFrame called"));

HRESULT res;

AVPacket pkt;
int got_output;

avpicture_fill((AVPicture*) m_pFrame, (uint8_t*) pData, m_pCodCtx->pix_fmt,
m_pCodCtx->width, m_pCodCtx->height);

av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;

m_lastErrNum = avcodec_encode_video2(m_pCodCtx, &pkt, m_pFrame,
&got_output);
if (m_lastErrNum < 0)
RET(FFMPEG_E_ENCODEFAILED);

if (got_output)
{
pkt.pts = pkt.dts = m_cnt++;
if (m_pCodCtx->coded_frame->key_frame)
pkt.flags |= AV_PKT_FLAG_KEY;

pkt.stream_index = m_pStream->index;

m_lastErrNum = av_write_frame(m_pFmtCtx, &pkt);
if (m_lastErrNum < 0)
RET(FFMPEG_E_WRITEFILEFAILED);

av_free_packet(&pkt);
}

DBGPRINT(_T("CFFMPEGVideoLoadEngine::WriteFrame succeeded"));
return S_OK;

failed:
DBGPRINT(_T("CFFMPEGVideoLoadEngine::WriteFrame failed (exit code %x)"),
res);

return res;
}

HRESULT __stdcall CFFMPEGVideoStoreEngine::CloseFile()
{
DBGPRINT(_T("CFFMPEGVideoStoreEngine::CloseFile called"));

// Close file

av_write_trailer(m_pFmtCtx);

if (!(m_pFmtCtx->oformat->flags & AVFMT_NOFILE))
avio_close(m_pFmtCtx->pb);

FreeResources();

DBGPRINT(_T("CFFMPEGVideoLoadEngine::CloseFile succeeded"));
return S_OK;
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121130/f2ab5a42/attachment.html>

From moiein2000 at gmail.com Fri Nov 30 22:36:01 2012
From: moiein2000 at gmail.com (Matthew Einhorn)
Date: Fri, 30 Nov 2012 16:36:01 -0500
Subject: [Libav-user] Uploading large video files to bugtracker/ftp?
Message-ID: <CALCauYcF2mLXwFSuiBeSSTtGTEsQv6zbKYSkMDXRT+K=05ozMA@mail.gmail.com>

Hi,

Sorry if this has been answered elsewhere. I have a .mov file that's
~300MB. I'm trying to create a bug report for this file because there are
issues when converting it to rawvideo using the latest ffmpeg (older ffmpeg
cannot play them at all and vlc still can't). However, I'm unsure how to
supply the video file.

There's an option to attach a file to the bug report, but I imagine 300MB
is too much. On https://ffmpeg.org/bugreports.html it says to use dd to cut
the file before uploading to the ftp server. However, dd completely
destroys the file because it just copies a chunk without paying attention
to internal structures.

Thanks,
Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://ffmpeg.org/pipermail/libav-user/attachments/20121130/1825fba9/attachment.html>

From stefasab at gmail.com Fri Nov 30 23:47:17 2012
From: stefasab at gmail.com (Stefano Sabatini)
Date: Fri, 30 Nov 2012 23:47:17 +0100
Subject: [Libav-user] hello i need help in using swreample lib.
In-Reply-To: <CAOr2badVExG3fSus=jaFrPK603yRxNAgkW3CVJjduogUrafqpw@mail.gmail.com>
References: <CAOr2badVExG3fSus=jaFrPK603yRxNAgkW3CVJjduogUrafqpw@mail.gmail.com>
Message-ID: <20121130224717.GB10526@arborea>

On date Friday 2012-11-30 21:51:18 +0200, Rafi Fisher encoded:
> hello
>
> i try to trancode audio frames decoded from mp2 file into wav file.
>
> so from what i understand
>
> my input buffer should be the uint8_t *data field in the AVFrame
> structure
>
> my output buffer should be allocated using :
>

> int av_samples_alloc(uint8_t **audio_data, int *linesize, int
> nb_channels, int nb_samples, enum AVSampleFormat sample_fmt, int align);
>
> what size should i put in the nb_samples parameter ?

It depends, I suppose it will be the number of decoded samples.

>
> then i understand i should call :
>
> int swr_convert(struct SwrContext *s, uint8_t **out, int
> out_count, const uint8_t **in, int in_count);
>
> and pass the AVFrame data field as input buffer & the allocated
> samples buffer as the output buffer.
>
> how do i calculate the in_count & the out_count parameters.

It should be big enough to contain the samples after conversion.

Check this code:
http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/155151
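To make that concrete, a rough sketch of the sizing logic (the SwrContext is assumed to be configured already; packed stereo S16 and the 16000 -> 44100 rates are assumptions carried over from this thread):

#include <libavcodec/avcodec.h>
#include <libswresample/swresample.h>
#include <libavutil/samplefmt.h>
#include <libavutil/mathematics.h>

/* in_count is simply frame->nb_samples; the output allocation is sized for
 * the rate ratio plus whatever the resampler still buffers internally
 * (swr_get_delay), rounded up. */
static int convert_frame(SwrContext *swr, AVFrame *frame, uint8_t **out_data,
                         int out_rate, int in_rate)
{
    int out_linesize;
    int max_out = (int)av_rescale_rnd(swr_get_delay(swr, in_rate) + frame->nb_samples,
                                      out_rate, in_rate, AV_ROUND_UP);

    if (av_samples_alloc(out_data, &out_linesize, 2 /* channels, assumed */,
                         max_out, AV_SAMPLE_FMT_S16, 0) < 0)
        return -1;

    /* Returns the number of samples actually written per channel;
     * that is the amount to hand on to the wav muxer. */
    return swr_convert(swr, out_data, max_out,
                       (const uint8_t **)frame->data, frame->nb_samples);
}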

[...]
