drm/vblank: Add vblank works
Add vblank workers. The interface is similar to regular delayed work
and is mostly based on kthread_work. It allows scheduling delayed work
that executes once a particular vblank sequence has passed. It also
allows accurate flushing of scheduled vblank work: flushing waits for
both the vblank sequence and job execution to complete, or for the
work to be cancelled, whichever comes first.
Whatever hardware programming we do in the work must be fast: it has
to complete within the vblank or scanout period, sometimes within the
first few scanlines of the vblank. As such we use a high-priority
per-CRTC thread to accomplish this.
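As a rough usage sketch (not part of this patch; my_flip_work(),
my_crtc_arm() and my_crtc_quiesce() are hypothetical driver functions),
a driver might initialize a work item once, schedule it against a
target vblank sequence, and flush it before teardown:

  static void my_flip_work(struct kthread_work *base)
  {
          struct drm_vblank_work *work = to_drm_vblank_work(base);

          /*
           * Runs on the high-priority per-CRTC worker thread, outside
           * of IRQ context, once the target vblank sequence has passed.
           * Keep this short: it has to fit in the vblank/scanout window.
           */
  }

  static void my_crtc_arm(struct drm_crtc *crtc, struct drm_vblank_work *work)
  {
          drm_vblank_work_init(work, crtc, my_flip_work);

          /* nextonmiss=true defers to a later vblank if the target already passed */
          drm_vblank_work_schedule(work, drm_crtc_vblank_count(crtc) + 1, true);
  }

  static void my_crtc_quiesce(struct drm_vblank_work *work)
  {
          /* Wait for both the target vblank and the work execution to finish. */
          drm_vblank_work_flush(work);
  }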
Changes since v7:
* Stuff drm_vblank_internal.h and drm_vblank_work_internal.h contents
into drm_internal.h
* Get rid of unnecessary spinlock in drm_crtc_vblank_on()
* Remove !vblank->worker check
* Grab vbl_lock in drm_vblank_work_schedule()
* Mention self-rearming work items in drm_vblank_work_schedule() kdocs
* Return 1 from drm_vblank_work_schedule() if the work was scheduled
  successfully, 0 or error code otherwise (see the sketch after this
  list)
* Use drm_dbg_core() instead of DRM_DEV_ERROR() in
drm_vblank_work_schedule()
* Remove vblank->worker checks in drm_vblank_destroy_worker() and
drm_vblank_flush_worker()
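
A minimal sketch of that return-value convention (hypothetical caller,
assuming the semantics described in the note above):

  int ret = drm_vblank_work_schedule(work, target_vblank, true);

  if (ret < 0) {
          /* an error occurred, nothing was scheduled */
  } else if (ret == 0) {
          /* nothing new was scheduled */
  } else {
          /* ret == 1: the work is now queued for target_vblank */
  }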
Changes since v6:
* Get rid of ->pending and seqcounts, and implement flushing through
simpler means - danvet
* Get rid of work_lock, just use drm_device->event_lock
* Move drm_vblank_work item cleanup into drm_crtc_vblank_off() so that
we ensure that all vblank work has finished before disabling vblanks
* Add checks into drm_crtc_vblank_reset() so we yell if it gets called
  while there are vblank workers active
* Grab event_lock in both drm_crtc_vblank_on()/drm_crtc_vblank_off(),
the main reason for this is so that other threads calling
drm_vblank_work_schedule() are blocked from attempting to schedule
while we're in the middle of enabling/disabling vblanks.
* Move drm_handle_vblank_works() call below drm_handle_vblank_events()
* Simplify drm_vblank_work_cancel_sync()
* Fix drm_vblank_work_cancel_sync() documentation
* Move wake_up_all() calls out of spinlock where we can. The only one I
left was the call to wake_up_all() in drm_vblank_handle_works() as
this seemed like it made more sense just living in that function
(which is all technically under lock)
* Move drm_vblank_work related functions into their own source files
* Add drm_vblank_internal.h so we can export some functions we don't
want drivers using, but that we do need to use in drm_vblank_work.c
* Add a bunch of documentation
Changes since v4:
* Get rid of kthread interfaces we tried adding and move all of the
locking into drm_vblank.c. For implementing drm_vblank_work_flush(),
we now use a wait_queue and sequence counters in order to
differentiate between multiple work item executions.
* Get rid of drm_vblank_work_cancel() - this would have been pretty
  difficult to actually reimplement, and it occurred to me that neither
  nouveau nor i915 is even planning to use this function. Since there's
  also no async cancel function for most of the work interfaces in the
  kernel, it seems a bit unnecessary anyway.
* Get rid of to_drm_vblank_work() since we now are also able to just
pass the struct drm_vblank_work to work item callbacks anyway
Changes since v3:
* Use our own spinlocks, don't integrate so tightly with kthread_works
Changes since v2:
* Use kthread_workers instead of reinventing the wheel.
Cc: Tejun Heo <tj@kernel.org>
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Co-developed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200627194657.156514-4-lyude@redhat.com
/* SPDX-License-Identifier: MIT */

#ifndef _DRM_VBLANK_WORK_H_
#define _DRM_VBLANK_WORK_H_

#include <linux/kthread.h>

struct drm_crtc;

/**
 * struct drm_vblank_work - A delayed work item which delays until a target
 * vblank passes, and then executes at realtime priority outside of IRQ
 * context.
 *
 * See also:
 * drm_vblank_work_schedule()
 * drm_vblank_work_init()
 * drm_vblank_work_cancel_sync()
 * drm_vblank_work_flush()
 * drm_vblank_work_flush_all()
 */
struct drm_vblank_work {
	/**
	 * @base: The base &kthread_work item which will be executed by
	 * &drm_vblank_crtc.worker. Drivers should not interact with this
	 * directly, and instead rely on drm_vblank_work_init() to initialize
	 * this.
	 */
	struct kthread_work base;

	/**
	 * @vblank: A pointer to &drm_vblank_crtc this work item belongs to.
	 */
	struct drm_vblank_crtc *vblank;

	/**
	 * @count: The target vblank this work will execute on. Drivers should
	 * not modify this value directly, and instead use
	 * drm_vblank_work_schedule()
	 */
	u64 count;

	/**
	 * @cancelling: The number of drm_vblank_work_cancel_sync() calls that
	 * are currently running. A work item cannot be rescheduled until all
	 * calls have finished.
	 */
	int cancelling;

	/**
	 * @node: The position of this work item in
	 * &drm_vblank_crtc.pending_work.
	 */
	struct list_head node;
};

/**
 * to_drm_vblank_work - Retrieve the respective &drm_vblank_work item from a
 * &kthread_work
 * @_work: The &kthread_work embedded inside a &drm_vblank_work
 */
#define to_drm_vblank_work(_work) \
	container_of((_work), struct drm_vblank_work, base)

int drm_vblank_work_schedule(struct drm_vblank_work *work,
			     u64 count, bool nextonmiss);
void drm_vblank_work_init(struct drm_vblank_work *work, struct drm_crtc *crtc,
			  void (*func)(struct kthread_work *work));
bool drm_vblank_work_cancel_sync(struct drm_vblank_work *work);
void drm_vblank_work_flush(struct drm_vblank_work *work);
void drm_vblank_work_flush_all(struct drm_crtc *crtc);
#endif /* !_DRM_VBLANK_WORK_H_ */