Monday, March 26, 2012

Using ffmpeg for image pixel format conversion: are you still using libswscale?

Are you still using libswscale to do image pixel format conversion with ffmpeg? Hey, look out! ffmpeg has something new: libavfilter. It can completely replace libswscale, and it can automatically carry out some fairly complex conversion operations for you.
libavfilter is great, but it is also rather complex. If all you need is pixel format processing, libswscale is much simpler. Have a look at the latest ffplay.c: the amount of code wrapped between #if CONFIG_AVFILTER and #endif is large and confusing at first sight, but in order to keep up with the tide, we still have to learn it.
First, a few avfilter concepts need to be made clear (note: they are explained by analogy with DirectShow; if you have no DirectShow background you will not be able to follow the explanation below, so please learn the basic concepts of DirectShow first):

1. AVFilterGraph: almost identical to DirectShow's FilterGraph; represents a chain of connected filters.
2. AVFilter: represents one filter.
3. AVFilterPad: represents a filter's input or output port, equivalent to a DirectShow Pin. A filter that has only output pads is called a source; a filter that has only input pads is called a sink.
4. AVFilterLink: represents the connection between two filters.

Overall, libavfilter and DirectShow are very much alike.
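As a mental model only, the four concepts above relate roughly as in this toy sketch (made-up struct names, deliberately much simpler than libavfilter's real definitions):

```c
/* Toy model only: names mirror libavfilter's concepts, not its real API. */
typedef struct ToyFilter ToyFilter;

typedef struct ToyLink {          /* like AVFilterLink: joins two filters */
    ToyFilter *src, *dst;
} ToyLink;

typedef struct ToyPad {           /* like AVFilterPad: one input or output port */
    const char *name;
    ToyLink    *link;             /* set once the pad is connected */
} ToyPad;

struct ToyFilter {                /* like AVFilter/AVFilterContext */
    const char *name;
    ToyPad inputs[1], outputs[1]; /* only outputs used: source; only inputs: sink */
};

/* Connect a's output pad to b's input pad, as avfilter_link() would. */
static int toy_link(ToyFilter *a, ToyFilter *b, ToyLink *l)
{
    l->src = a;
    l->dst = b;
    a->outputs[0].link = l;
    b->inputs[0].link  = l;
    return 0;
}
```

A graph, then, is just the set of filters reachable through these links; the real AVFilterGraph additionally owns the filters and validates the connections.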
Now take ffplay.c as an example and walk through its AVFilter code step by step.

1. Create a graph:

```c
AVFilterGraph *graph = avfilter_graph_alloc();
```

2. Create the source:

```c
AVFilterContext *filt_src;
avfilter_graph_create_filter(&filt_src, &input_filter, "src", NULL, is, graph);
```

The first parameter receives the created filter (here, a source). The second parameter is the AVFilter structure describing it; this instance (input_filter) must be implemented by the caller, and it is what feeds frames into the graph. The third parameter is the name of the filter being created; the fourth is an args string (NULL here); the fifth is opaque user data (the caller's private data); the sixth is the graph pointer.

3. Create the sink:

```c
AVFilterContext *filt_out;
ret = avfilter_graph_create_filter(&filt_out, avfilter_get_by_name("buffersink"),
                                   "out", NULL, pix_fmts, graph);
```

The parameters are as above and need no further explanation. The sink created here is a buffersink; have a look at the libavfilter source file sink_buffer.c to see what it is. A buffersink is simply a sink that buffers the output frames. Naturally, its output does not go through a pad, because there is no filter behind it. Using it as the sink makes it easy for the code driving the graph to obtain the processed frames.

4. Connect the source and the sink:

```c
avfilter_link(filt_src, 0, filt_out, 0);
```

The first parameter is the filter in front; the second is the index of the pad to connect on that filter; the third is the filter behind; the fourth is the index of the pad to connect on it.

5. Run a final check on the graph:

```c
avfilter_graph_config(graph, NULL);
```

Since we fetch the processed frames from the sink, it is best to keep a reference to the sink, for example:

```c
AVFilterContext *out_video_filter = filt_out;
```

6. Implement input_filter. Since input_filter is a source, only output pads are assigned to it, and only one:
```c
static AVFilter input_filter = {
    .name          = "ffplay_input",
    .priv_size     = sizeof(FilterPriv),
    .init          = input_init,
    .uninit        = input_uninit,
    .query_formats = input_query_formats,

    .inputs  = (AVFilterPad[]) {{ .name = NULL }},
    .outputs = (AVFilterPad[]) {{ .name          = "default",
                                  .type          = AVMEDIA_TYPE_VIDEO,
                                  .request_frame = input_request_frame,
                                  .config_props  = input_config_props, },
                                { .name = NULL }},
};
```

Then implement the AVFilter callback functions. init() and uninit() are used to initialize and destroy the resources the filter uses.
See the implementation of input_init() in ffplay.c:

```c
static int input_init(AVFilterContext *ctx, const char *args, void *opaque)
{
    FilterPriv *priv = ctx->priv;
    AVCodecContext *codec;
    if (!opaque)
        return -1;

    priv->is = opaque;
    codec    = priv->is->video_st->codec;
    codec->opaque = ctx;
    if (codec->codec->capabilities & CODEC_CAP_DR1) {
        av_assert0(codec->flags & CODEC_FLAG_EMU_EDGE);
        priv->use_dr1 = 1;
        codec->get_buffer     = input_get_buffer;
        codec->release_buffer = input_release_buffer;
        codec->reget_buffer   = input_reget_buffer;
        codec->thread_safe_callbacks = 1;
    }

    priv->frame = avcodec_alloc_frame();

    return 0;
}
```

FilterPriv is ffplay's private data structure for input_filter.
The main work here is allocating an AVFrame, which is used to hold frames obtained from the device. uninit() is even simpler, so we will not look at it. The output pad's request_frame() also has to be implemented, so that the filter behind input_filter can obtain frames:

```c
static int input_request_frame(AVFilterLink *link)
{
    FilterPriv *priv = link->src->priv;
    AVFilterBufferRef *picref;
    int64_t pts = 0;
    AVPacket pkt;
    int ret;

    while (!(ret = get_video_frame(priv->is, priv->frame, &pts, &pkt)))
        av_free_packet(&pkt);
    if (ret < 0)
        return -1;

    if (priv->use_dr1 && priv->frame->opaque) {
        picref = avfilter_ref_buffer(priv->frame->opaque, ~0);
    } else {
        picref = avfilter_get_video_buffer(link, AV_PERM_WRITE, link->w, link->h);
        av_image_copy(picref->data, picref->linesize,
                      priv->frame->data, priv->frame->linesize,
                      picref->format, link->w, link->h);
    }
    av_free_packet(&pkt);

    avfilter_copy_frame_props(picref, priv->frame);
    picref->pts = pts;

    avfilter_start_frame(link, picref);
    avfilter_draw_slice(link, 0, link->h, 1);
    avfilter_end_frame(link);

    return 0;
}
```

The caller obtains a processed frame from the sink with:

```c
av_buffersink_get_buffer_ref(filt_out, &picref, 0);
```

The frame obtained is stored in picref.
This call causes the filters in the graph to invoke request_frame() from back to front: the output pad of the last filter is called first, and eventually the source's request_frame(), i.e. input_request_frame(), is reached. input_request_frame() calls get_video_frame() (see ffplay.c) to obtain a frame from the device (which may require decoding it), and then copies the frame data into picref; within a filter graph, a frame being processed is represented by an AVFilterBufferRef. Some of the frame's attributes are also copied into picref. Finally it calls:

```c
avfilter_start_frame(link, picref);
avfilter_draw_slice(link, 0, link->h, 1);
avfilter_end_frame(link);
```

to push the frame through for processing.
These three functions correspond to three function pointers on a pad: start_frame, draw_slice and end_frame. Taking start_frame as an example, the process goes like this: first the source's start_frame is called and does whatever processing is necessary; it then calls the start_frame of the filter connected to the source, and each filter's output pad is responsible for passing the call down the chain in the same way. Once the sink's start_frame() has been called, the calls return layer by layer back to the source's output pad. When all three functions, starting from the source's output pad, have completed, the final result for the frame is ready and can be obtained from the sink.

Compared with DirectShow, avfilter has no notion of push mode versus pull mode, and no thread is implemented at the source's output pad; the operation of the graph is driven entirely by the caller.
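The caller-driven pull flow described above can be mimicked in a few lines of plain C. This is a toy sketch with made-up names, not real libavfilter code: the caller pulls on the sink, the request propagates backward to the source's request_frame(), and the frame then travels forward through start_frame()-style callbacks:

```c
/* Toy pull-model sketch, not real libavfilter code. */
typedef struct Filter Filter;
struct Filter {
    const char *name;
    Filter *prev;                           /* upstream filter (toward the source) */
    Filter *next;                           /* downstream filter (toward the sink) */
    int   (*request_frame)(Filter *f);      /* pull: called from downstream */
    void  (*start_frame)(Filter *f, int frame); /* push: data flows downstream */
    int     last_frame;                     /* what this filter last saw */
};

static void generic_start_frame(Filter *f, int frame)
{
    f->last_frame = frame;                  /* "process" the frame */
    if (f->next)                            /* pass it on through the output pad */
        f->next->start_frame(f->next, frame + 1);  /* +1: toy transform, so each hop is visible */
}

static int generic_request_frame(Filter *f)
{
    return f->prev->request_frame(f->prev); /* forward the pull upstream */
}

static int source_request_frame(Filter *f)
{
    /* like input_request_frame(): fetch a frame, then push it downstream */
    f->start_frame(f, 100);
    return 0;
}
```

Building a source-filter-sink chain and calling the sink's request_frame() once then makes one frame flow through the whole chain, which is exactly how av_buffersink_get_buffer_ref() drives the real graph.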