Vulkan: Migrate buffers between memory types to improve GPU performance (#4540)
* Initial implementation of migration between memory heaps
  - Missing OOM handling
  - Missing `_map` data safety when remapping
  - Copy may not have completed yet (needs some kind of fence)
  - Map may be unmapped before it is done being used (needs scoped access)
  - SSBO accesses are all "writes" - maybe pass info in another way
  - Missing keeping map type when resizing buffers (should this be done?)
* Ensure migrated data is in place before flushing.
* Fix issue where old waitable would be signalled.
  - There is a real issue where existing Auto<> references need to be replaced.
* Swap bound Auto<> instances when swapping buffer backing
* Fix conversion buffers
* Don't try to move buffers if the host has shared memory.
* Make GPU methods return PinnedSpan with scope
* Storage Hint
* Fix stupidity
* Fix rebase
* Tweak rules
  Attempt to sidestep BOTW slowdown
* Remove line
* Migrate only when command buffers flush (see the sketch below the commit metadata)
* Change backing swap log to debug
* Address some feedback
* Disallow backing swap when the flush lock is held by the current thread
* Make PinnedSpan from ReadOnlySpan explicitly unsafe
* Fix some small issues
  - Index buffer swap fixed
  - Allocate DeviceLocal buffers using a separate block list to images
* Remove alternative flags
* Address feedback
Parent: 67b4e63cff
Commit: 9f1cf6458c
35 changed files with 660 additions and 167 deletions
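The migration bullets above describe a usage-driven heuristic: a buffer starts in host-visible memory, and its backing is swapped to a device-local allocation at command-buffer flush time once GPU-side writes dominate and the host no longer needs the data flushed back. The C# below is only a rough sketch of that idea; the type and member names (BufferBackingType, BufferUsageTracker, RecordGpuWrite) and the thresholds are hypothetical and are not taken from the Ryujinx codebase.

// Hypothetical sketch only; names and thresholds are invented, not Ryujinx's API.
public enum BufferBackingType
{
    HostMemory,
    DeviceMemory,
}

public class BufferUsageTracker
{
    private int _gpuWrites;   // e.g. SSBO or transform feedback writes observed
    private int _hostFlushes; // times the CPU side asked for the data back

    public BufferBackingType Backing { get; private set; } = BufferBackingType.HostMemory;

    public void RecordGpuWrite() => _gpuWrites++;
    public void RecordHostFlush() => _hostFlushes++;

    // Evaluated only when command buffers flush ("Migrate only when command buffers flush").
    public bool ShouldMigrateToDeviceLocal()
    {
        if (Backing == BufferBackingType.DeviceMemory)
        {
            return false; // Already migrated.
        }

        // Hypothetical rule: heavily GPU-written, never flushed back to the host.
        return _gpuWrites > 16 && _hostFlushes == 0;
    }

    public void SwapBackingToDeviceLocal()
    {
        // A real implementation would allocate a DeviceLocal buffer, copy the old
        // contents on the GPU, ensure the copy completes before any flush reads the
        // data, and swap the bound Auto<> instances so existing users see the new backing.
        Backing = BufferBackingType.DeviceMemory;
    }
}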
@@ -18,9 +18,9 @@ namespace Ryujinx.Graphics.GAL.Multithreading.Commands.Texture
         public static void Run(ref TextureGetDataCommand command, ThreadedRenderer threaded, IRenderer renderer)
         {
-            ReadOnlySpan<byte> result = command._texture.Get(threaded).Base.GetData();
+            PinnedSpan<byte> result = command._texture.Get(threaded).Base.GetData();
 
-            command._result.Get(threaded).Result = new PinnedSpan<byte>(result);
+            command._result.Get(threaded).Result = result;
         }
     }
 }
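The hunk above reflects two of the commit bullets: GetData() now returns a PinnedSpan<byte> directly instead of the caller wrapping a ReadOnlySpan<byte>, and constructing a PinnedSpan from a ReadOnlySpan is now an explicitly unsafe path. As a rough illustration only, a "span with scope" can be modeled as a ref struct that exposes the data and releases its backing (for example a mapped staging buffer) on Dispose; the ScopedSpan type and the GetData call shape below are hypothetical and may differ substantially from the actual Ryujinx PinnedSpan.

using System;

// Hypothetical illustration of a "span with scope"; not the actual Ryujinx PinnedSpan.
public ref struct ScopedSpan<T>
{
    private readonly IDisposable _scope; // keeps the mapping/staging buffer alive

    public ReadOnlySpan<T> Span { get; }

    public ScopedSpan(ReadOnlySpan<T> span, IDisposable scope)
    {
        Span = span;
        _scope = scope;
    }

    public void Dispose() => _scope?.Dispose();
}

// Usage sketch: copy the data out before the scope is disposed.
//
//     using ScopedSpan<byte> result = texture.GetData(); // hypothetical call shape
//     byte[] copy = result.Span.ToArray();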