8000 Is "Update only the tiles that change (currently CPU-only)" really desired for the GPU? · Issue #33 · google/forma · GitHub
This repository was archived by the owner on Jul 18, 2024. It is now read-only.
Is "Update only the tiles that change (currently CPU-only)" really desired for the GPU? #33
Open
@danielzgtg

Description

@danielzgtg

There is "Update only the tiles that change (currently CPU-only)" in the README. That suggests that it is being considered that this should be ported to the GPU and such a port is planned. I question this because it goes against my entire mental model of GPU performance.

Supporting my skepticism is the convention in modern rendering software of simply repainting everything. Every 3D game redraws the whole world into a new frame and throws the previous one away. Desktop environments and windowing systems redraw the whole screen to support modern effects and decorations: with the introduction of compositing, their equivalent of "update only the tiles that change", called "damage", is gone, since every application gets its own buffer and modern software no longer needs to think about it.

I have also read that modern GPUs are actually faster when data is not reused for partial repaints. One form of reuse slowness is drawing something and then discarding it; our "update only the tiles that change" would be equivalent to discarding most of the screen. The Asahi Linux GPU blog states that reading framebuffer data back for reuse is expensive compared to simply rerendering and overwriting it. Then there is the problem of starting the next frame on the GPU right after the last one finishes, without waiting out the CPU-GPU round-trip synchronization time; the sketch below illustrates that stall. All of this supports the idea that GPUs like data pushed through without the latency of data dependencies, and that reuse only impairs parallel processing, slowing the GPU down with dependencies.
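
To make the round-trip point concrete, here is a rough sketch of the blocking readback I am worried about, using wgpu as an example since that is what the GPU backend builds on. `read_back_blocking` is a hypothetical helper, not forma code, and the exact wgpu API shown (roughly wgpu 0.19) shifts between versions:

```rust
/// Read a staging buffer back to the CPU, blocking until the GPU is done.
/// Assumes `staging` was created with
/// `wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST`.
fn read_back_blocking(device: &wgpu::Device, staging: &wgpu::Buffer) -> Vec<u8> {
    let slice = staging.slice(..);
    let (tx, rx) = std::sync::mpsc::channel();
    slice.map_async(wgpu::MapMode::Read, move |result| {
        let _ = tx.send(result);
    });
    // This is the synchronization bubble: the CPU stalls here until the
    // GPU has drained its queue, instead of pipelining the next frame.
    let _ = device.poll(wgpu::Maintain::Wait);
    rx.recv()
        .expect("map_async callback dropped")
        .expect("buffer mapping failed");
    let bytes = slice.get_mapped_range().to_vec();
    staging.unmap();
    bytes
}
```

Any scheme that decides on the CPU which tiles to re-render based on what the GPU produced last frame seems to need a dependency like this somewhere, which is exactly what pipelined GPU rendering tries to avoid.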

What might save this idea is battery consumption. Firefox goes to the trouble of using private APIs on macOS so that it can reuse unchanged frame data. The GIFs in the README are about gaming, however, so I don't know how much battery matters here compared to Firefox's text-heavy use case.

So I would like people to teach me: what are your considerations, viewpoints, and experiences with GPU framebuffer reuse?
