<chapter id="chapter-queryevents">
<title>Position tracking and seeking</title>

<para>
So far, we've looked at how to create a pipeline to do media processing
and how to make it run. Most application developers will be interested
in providing feedback to the user on media progress. Media players, for
example, will want to show a slider indicating the progress in the
song, and usually also a label showing the stream length. Transcoding
applications will want to show a progress bar indicating what percentage
of the task has been completed. &GStreamer; has built-in support for all
of this using a concept known as <emphasis>querying</emphasis>. Since
seeking is very similar, it will be discussed here as well. Seeking is
done using the concept of <emphasis>events</emphasis>.
</para>

<sect1 id="section-querying">
<title>Querying: getting the position or length of a stream</title>

<para>
Querying means requesting a specific stream property related to
progress tracking, such as the length of a stream (if available) or
the current position. These stream properties can be retrieved in
various formats such as time, audio samples, video frames or bytes.
The function most commonly used for this is
<function>gst_element_query ()</function>, although some convenience
wrappers are provided as well (such as
<function>gst_element_query_position ()</function> and
<function>gst_element_query_duration ()</function>). You can generally
query the pipeline directly, and it will figure out the internal
details for you, such as which element to query.
</para>

| <para> |
| Internally, queries will be sent to the sinks, and |
| <quote>dispatched</quote> backwards until one element can handle it; |
| that result will be sent back to the function caller. Usually, that |
| is the demuxer, although with live sources (from a webcam), it is the |
| source itself. |
| </para> |
| |
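<para>
As a sketch of how the generic <function>gst_element_query ()</function>
function could be used directly (the helper name below is made up for
illustration; the convenience wrappers above do essentially the same
thing for you), one could create a position query, let the pipeline
dispatch it to an element that can answer it, and then parse the result:
</para>
<programlisting>
static void
print_position_with_query (GstElement *pipeline)
{
  GstQuery *query;
  GstFormat fmt;
  gint64 pos;

  /* create a position query in the time format and let the pipeline
   * dispatch it to an element that can answer it */
  query = gst_query_new_position (GST_FORMAT_TIME);
  if (gst_element_query (pipeline, query)) {
    gst_query_parse_position (query, &fmt, &pos);
    g_print ("Position: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (pos));
  } else {
    g_print ("Position query failed\n");
  }
  gst_query_unref (query);
}
</programlisting>
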
<programlisting>
<!-- example-begin query.c a -->
#include <gst/gst.h>
<!-- example-end query.c a -->
<!-- example-begin query.c b --><!--
static void
my_bus_message_cb (GstBus *bus,
                   GstMessage *message,
                   gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;

  switch (GST_MESSAGE_TYPE (message)) {
    case GST_MESSAGE_ERROR: {
      GError *err;
      gchar *debug;

      gst_message_parse_error (message, &err, &debug);
      g_print ("Error: %s\n", err->message);
      g_error_free (err);
      g_free (debug);

      g_main_loop_quit (loop);
      break;
    }
    case GST_MESSAGE_EOS:
      /* end-of-stream */
      g_main_loop_quit (loop);
      break;
    default:
      break;
  }
}
-->
<!-- example-end query.c b -->
<!-- example-begin query.c c -->
static gboolean
cb_print_position (GstElement *pipeline)
{
  GstFormat fmt = GST_FORMAT_TIME;
  gint64 pos, len;

  if (gst_element_query_position (pipeline, &fmt, &pos)
      && gst_element_query_duration (pipeline, &fmt, &len)) {
    g_print ("Time: %" GST_TIME_FORMAT " / %" GST_TIME_FORMAT "\r",
        GST_TIME_ARGS (pos), GST_TIME_ARGS (len));
  }

  /* call me again */
  return TRUE;
}

gint
main (gint argc,
      gchar *argv[])
{
  GstElement *pipeline;
<!-- example-end query.c c -->
[..]<!-- example-begin query.c d --><!--
  GstStateChangeReturn ret;
  GMainLoop *loop;
  GError *err = NULL;
  GstBus *bus;
  gchar *l;

  /* init */
  gst_init (&argc, &argv);

  /* args */
  if (argc != 2) {
    g_print ("Usage: %s <filename>\n", argv[0]);
    return -1;
  }

  loop = g_main_loop_new (NULL, FALSE);

  /* build pipeline, the easy way */
  l = g_strdup_printf ("filesrc location=\"%s\" ! oggdemux ! vorbisdec ! "
      "audioconvert ! audioresample ! alsasink",
      argv[1]);
  pipeline = gst_parse_launch (l, &err);
  if (pipeline == NULL || err != NULL) {
    g_printerr ("Cannot build pipeline: %s\n", err->message);
    g_error_free (err);
    g_free (l);
    if (pipeline)
      gst_object_unref (pipeline);
    return -1;
  }
  g_free (l);

  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_add_signal_watch (bus);
  g_signal_connect (bus, "message", G_CALLBACK (my_bus_message_cb), loop);
  gst_object_unref (bus);

  /* play */
  ret = gst_element_set_state (pipeline, GST_STATE_PLAYING);
  if (ret == GST_STATE_CHANGE_FAILURE)
    g_error ("Failed to set pipeline to PLAYING.\n");
--><!-- example-end query.c d -->
<!-- example-begin query.c e -->
  /* run the pipeline; print the current position every 200 milliseconds */
  g_timeout_add (200, (GSourceFunc) cb_print_position, pipeline);
  g_main_loop_run (loop);
<!-- example-end query.c e -->
[..]<!-- example-begin query.c f --><!--
  /* clean up */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (GST_OBJECT (pipeline));

  return 0;
--><!-- example-end query.c f -->
<!-- example-begin query.c g -->
}
<!-- example-end query.c g --></programlisting>
</sect1>

<sect1 id="section-eventsseek">
<title>Events: seeking (and more)</title>

<para>
Events work in a way very similar to queries. Dispatching, for
example, works exactly the same for events (and also has the same
limitations), and they can likewise be sent to the toplevel pipeline,
which will figure out everything for you. Although there are more
ways in which applications and elements can interact using events,
we will only focus on seeking here. This is done using the seek event.
A seek event contains a playback rate, a seek offset format (which is
the unit of the offsets to follow, e.g. time, audio samples, video
frames or bytes), optionally a set of seeking-related flags (e.g.
whether internal buffers should be flushed), a seek method (which
indicates relative to what the offset was given), and seek offsets.
The first offset (cur) is the new position to seek to, while
the second offset (stop) is optional and specifies a position where
streaming is supposed to stop. Usually it is fine to just specify
GST_SEEK_TYPE_NONE and -1 as the stop type and stop offset. This
behaviour is also wrapped in the convenience function
<function>gst_element_seek ()</function>, shown below.
</para>

<programlisting>
static void
seek_to_time (GstElement *pipeline,
              gint64 time_nanoseconds)
{
  if (!gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
                         GST_SEEK_TYPE_SET, time_nanoseconds,
                         GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE)) {
    g_print ("Seek failed!\n");
  }
}
</programlisting>
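<para>
Under the hood, <function>gst_element_seek ()</function> constructs a
seek event and sends it to the element. You can also build and send the
event yourself. A minimal sketch of that (the helper name is made up for
illustration):
</para>
<programlisting>
static gboolean
send_seek_event (GstElement *pipeline,
                 gint64 position_ns)
{
  GstEvent *seek_event;

  /* rate 1.0, time format, flush internal buffers, absolute start
   * position, no stop position */
  seek_event = gst_event_new_seek (1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, position_ns,
      GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);

  /* the pipeline takes ownership of the event */
  return gst_element_send_event (pipeline, seek_event);
}
</programlisting>
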
<para>
Seeks should usually be done when the pipeline is in the PAUSED or
PLAYING state (when it is in the PLAYING state, the pipeline will pause
itself, issue the seek, and then set itself back to PLAYING again).
</para>

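<para>
As a minimal sketch of guarding a seek with such a state check (the
helper name is made up for illustration), one could look at the current
state without waiting and only issue the seek when the pipeline is in
PAUSED or PLAYING:
</para>
<programlisting>
static void
seek_if_ready (GstElement *pipeline,
               gint64 position_ns)
{
  GstState state;

  /* look at the current state without waiting for pending transitions */
  gst_element_get_state (pipeline, &state, NULL, 0);

  if (state != GST_STATE_PAUSED && state != GST_STATE_PLAYING) {
    g_print ("Pipeline is not in PAUSED or PLAYING, not seeking\n");
    return;
  }

  if (!gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
          GST_SEEK_TYPE_SET, position_ns,
          GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE))
    g_print ("Seek failed!\n");
}
</programlisting>
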
<para>
It is important to realise that seeks do not happen instantly, in the
sense of being finished by the time <function>gst_element_seek ()</function>
returns. Depending on the specific elements involved, the actual
seeking might be done later in another thread (the streaming thread),
and it might take a short time until buffers from the new seek position
reach downstream elements such as sinks (if the seek was non-flushing
then it might take a bit longer).
</para>

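<para>
If an application needs to know when the seek has actually been
processed and the sinks have data at the new position again (for
example before grabbing a snapshot of the new frame), one way is to
block until the asynchronous state change triggered by a flushing seek
has completed. A sketch, assuming that blocking the calling thread is
acceptable (the helper name is made up for illustration):
</para>
<programlisting>
static void
seek_and_wait (GstElement *pipeline,
               gint64 position_ns)
{
  if (!gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
          GST_SEEK_TYPE_SET, position_ns,
          GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE)) {
    g_print ("Seek failed!\n");
    return;
  }

  /* wait (possibly forever) until the pipeline has prerolled again
   * at the new position */
  gst_element_get_state (pipeline, NULL, NULL, GST_CLOCK_TIME_NONE);
}
</programlisting>
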
<para>
It is possible to do multiple seeks in short time-intervals, such as
a direct response to slider movement. After a seek, internally, the
pipeline will be paused (if it was playing), the position will be
reset internally, the demuxers and decoders will decode from the new
position onwards, and this will continue until all sinks have data
again. If the pipeline was originally playing, it will be set back to
playing, too. Since the new position is immediately available in a
video output, you will see the new frame even if your pipeline is not
in the playing state.
</para>
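<para>
As a sketch of hooking this up to a slider (the callback signature and
the 0.0 to 1.0 value range are made up for illustration; only the
&GStreamer; calls matter here), each slider movement is translated into
a flushing seek. Adding GST_SEEK_FLAG_KEY_UNIT makes rapid scrubbing
cheaper by snapping to the nearest keyframe:
</para>
<programlisting>
/* assumes the slider delivers a value between 0.0 and 1.0 and that the
 * stream duration has already been queried into 'duration' */
static void
cb_slider_moved (gdouble slider_value,
                 GstElement *pipeline,
                 gint64 duration)
{
  gint64 position = (gint64) (slider_value * duration);

  /* flush old data and snap to the nearest keyframe so that rapid
   * successive seeks stay responsive */
  gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_KEY_UNIT,
      GST_SEEK_TYPE_SET, position,
      GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
}
</programlisting>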
</sect1>
</chapter>