Kodi Community Forum
Intel VAAPI howto with Leia v18 nightly based on Ubuntu 18.04 server - Printable Version

+- Kodi Community Forum (https://forum.kodi.tv)
+-- Forum: Support (https://forum.kodi.tv/forumdisplay.php?fid=33)
+--- Forum: General Support (https://forum.kodi.tv/forumdisplay.php?fid=111)
+---- Forum: Linux (https://forum.kodi.tv/forumdisplay.php?fid=52)
+---- Thread: Intel VAAPI howto with Leia v18 nightly based on Ubuntu 18.04 server (/showthread.php?tid=231955)



RE: New Era: VAAPI with EGL interoperation - fritsch - 2015-11-06

As you say - the Pi also works - then this one.


RE: New Era: VAAPI with EGL interoperation - m_gl - 2015-11-07

(2015-11-06, 20:37)fritsch Wrote: As you say - the Pi also works - then this one.

http://pastebin.com/9n6yughw
Same file.


RE: New Era: VAAPI with EGL interoperation - FernetMenta - 2015-11-07

OMXPlayer is a different story - I'm not even sure it logs this. What exactly are your visual observations?

btw: the Pi has some capability of "resampling" passthrough


RE: New Era: VAAPI with EGL interoperation - ilovethakush - 2015-11-07

What's the difference between VAAPI Motion Compensated and VAAPI Motion Adaptive and which would you suggest for the chromebox?

I know right now on the wiki it says Motion Adaptive but for the longest time it said Motion Compensated. Just wondering.


RE: New Era: VAAPI with EGL interoperation - -DDD- - 2015-11-07

1st Post: Deinterlacing-Method: VAAPI-MCDI or VAAPI-MADI (Baytrail, Sandybridge)


RE: New Era: VAAPI with EGL interoperation - the-dreamer - 2015-11-07

(2015-11-06, 17:52)fritsch Wrote: Btw. this is by chance the same Series - that the-dreamer is watching? :-)

Which series are you talking about? The logs only show serie201.mkv .... have I missed something?


RE: New Era: VAAPI with EGL interoperation - fritsch - 2015-11-07

I think he renamed the file beforehand - the sample I got from you looked quite similar codec/audio-wise.


RE: New Era: VAAPI with EGL interoperation - the-dreamer - 2015-11-07

I think you confused me with somebody else. I can't remember sending any file to you Big Grin


RE: New Era: VAAPI with EGL interoperation - fritsch - 2015-11-07

Yeah - that's most likely possible.


RE: New Era: VAAPI with EGL interoperation - BigL-New - 2015-11-07

@fritsch

Do you have any reports about problems with the Intel driver in kernel 4.3.0? On my BeeBox I see messages like this in dmesg:

[drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=69823 end=69824)

It's rather unnoticeable from a Kodi user's perspective, but I've seen 6 such messages after 2 days of normal playback.


RE: New Era: VAAPI with EGL interoperation - fritsch - 2015-11-07

Nope - not yet. Please go to bugs.freedesktop.org and file it accordingly if this has not already happened.

Edit: Check if your kernel contains this commit, mentioned in here: https://bugs.freedesktop.org/show_bug.cgi?id=91579
Edit2: Please try drm-intel-nightly first: http://kernel.ubuntu.com/~kernel-ppa/mainline/drm-intel-nightly/current/CHANGES


RE: New Era: VAAPI with EGL interoperation - BigL-New - 2015-11-07

Ad. 1 - Yes, my 4.3.0 (vanilla with a few OE patches) has this commit.
Ad. 2 - will try :-)


RE: New Era: VAAPI with EGL interoperation - noggin - 2015-11-07

(2015-11-07, 06:22)ilovethakush Wrote: What's the difference between VAAPI Motion Compensated and VAAPI Motion Adaptive and which would you suggest for the chromebox?

I know right now on the wiki it says Motion Adaptive but for the longest time it said Motion Compensated. Just wondering.

This is my understanding :

MADI uses Motion Adaptive techniques. A picture, or elements of a picture, are analysed across fields to detect whether they are static or moving. If they are static, both fields are used to create a frame (i.e. a Weave); if they are moving, elements from just one field (i.e. a simple Bob) are used to create a frame, possibly with a bit of filtering to reduce the visibility of the vertical resolution drop and avoid jagged edges. Some deinterlacers apply Motion Adaptation across the entire field/frame (so they switch between all-Bob and all-Weave), whilst others divide the picture into blocks and apply the Bob/Weave decision on a block-by-block basis. I think the Intel approach is the latter. (Some cheap TVs and consumer devices that deinterlace use the former, and you see weave artefacts on shot changes between interlaced- and progressive-native content.) Effectively, with MADI you get the best of both Weave and Bob techniques, either globally across a frame/field or block-by-block within a frame/field.

MCDI uses Motion Compensated techniques. The motion between fields is detected as in MADI, with block-based detection, BUT the motion within blocks between fields is also analysed, so a motion vector can be generated for each block. This allows information from more than one field to be used to create the output frame even for moving content: you shift the content from one field by the generated motion vectors to create the missing field to pair with the field before or after. This is a lot better than the Bob (or Bob with a bit of filtering) used on moving content in MADI, and should mean increased vertical resolution on moving content compared to MADI. The quality achieved will depend on the number of fields the deinterlacer analyses and stores, and on the motion detection algorithm used.

MADI just needs motion detection; MCDI needs motion detection AND a vector generation algorithm (like block matching) to compensate for the motion. As a result MADI requires less processing than MCDI, so some very low-spec GPUs can only do MADI.
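To make the Bob/Weave decision concrete, here is a toy block-based motion-adaptive sketch in Python/NumPy. It is not Intel's implementation - the block size, the threshold, and the simple mean-absolute-difference motion measure are all my own illustrative assumptions - but it shows the core MADI idea: compare same-parity fields per block, weave static blocks, bob moving ones.

```python
import numpy as np

def deinterlace_madi_block(top_field, bottom_field, prev_top_field,
                           block_w=8, motion_thresh=8.0):
    """Toy motion-adaptive deinterlacer (MADI-style sketch).

    Compares the current top field against the previous top field in
    horizontal blocks. Static blocks are woven (both fields interleaved);
    moving blocks are bobbed (top field line-doubled).
    """
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)

    for x0 in range(0, w, block_w):
        x1 = min(x0 + block_w, w)
        # mean absolute difference between same-parity fields as motion measure
        motion = np.abs(top_field[:, x0:x1].astype(float)
                        - prev_top_field[:, x0:x1].astype(float)).mean()
        if motion < motion_thresh:
            # weave: even output lines from the top field, odd from the bottom
            frame[0::2, x0:x1] = top_field[:, x0:x1]
            frame[1::2, x0:x1] = bottom_field[:, x0:x1]
        else:
            # bob: line-double the top field, discarding the bottom field
            frame[0::2, x0:x1] = top_field[:, x0:x1]
            frame[1::2, x0:x1] = top_field[:, x0:x1]
    return frame
```

A real deinterlacer would also filter the bobbed blocks to hide the halved vertical resolution; this sketch only shows the per-block decision itself.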


RE: New Era: VAAPI with EGL interoperation - fritsch - 2015-11-07

Thanks much for the summary. I assume the same.


RE: New Era: VAAPI with EGL interoperation - noggin - 2015-11-07

(2015-11-07, 12:39)fritsch Wrote: Thanks much for the summary. I assume the same.

It's always struck me that the motion vectors in H.264 and MPEG-2 encoding/decoding would be useful for deinterlacing - yet there never seems to be a way of feeding them from the decoder to the deinterlacer?

I'm old enough to remember the HD-MAC HDTV system - which sent a 576/50i analogue component video signal which had been derived from a 1152/50i HDTV source, and used 1:1, 2:1 and 4:1 interlacing paths on a block-by-block basis BUT also sent the vectors and interlacing technique for each block as digital data in the vertical blanking, to allow a receiver to reconstruct without having to do the motion detecting itself (as the encoder sent the vectors it thought would be best for decoding). The data was carried in VBI and HBI along with digital audio.

H264 and MPEG2 use similar techniques, with the encoder sending the motion vectors rather than the decoder having to detect them, so it seems strange that they can't be fed forward to the deinterlacer?

If the vectors were available, and were useful, they'd reduce the computation requirement of the deinterlacer?
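The point about reusing decoder vectors can be sketched in a few lines. This is a hypothetical illustration, not any real decoder/deinterlacer interface: assume the decoder hands over one horizontal motion vector per block, so the deinterlacer can synthesize the missing field by shifting the previous field instead of running its own block-matching search.

```python
import numpy as np

def mc_interpolate_field(prev_field, vectors, block_w=8):
    """Toy motion-compensated field synthesis (MCDI-style sketch).

    `vectors` holds one horizontal motion vector per block, imagined as
    handed over from an H.264/MPEG-2 decoder rather than re-estimated.
    Each block of the previous field is shifted by its vector to predict
    the missing field.
    """
    h, w = prev_field.shape
    out = np.empty_like(prev_field)
    for i, x0 in enumerate(range(0, w, block_w)):
        x1 = min(x0 + block_w, w)
        dx = vectors[i]
        # shift the block horizontally by its motion vector,
        # clamping source coordinates at the frame edges
        src = np.clip(np.arange(x0, x1) - dx, 0, w - 1)
        out[:, x0:x1] = prev_field[:, src]
    return out
```

The expensive part of MCDI is estimating `vectors`; if they arrived for free from the decoder, only this cheap shift-and-copy step would remain.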