I don’t think there would be any real benefit to this over DXVK and VKD3D
The main use case of this is in porting. So if someone wanted to make a native port of their game, this library would make it potentially much easier.
But why this instead of DXVK or VKD3D? Those can just as easily be integrated.
Both use Wine IIRC; OP is talking about applications written directly for Linux.
Edit: I’m wrong
Wine uses VKD3D and DXVK, not the other way around. People have even used DXVK on Windows to improve performance in certain situations.
What, DX-to-Vulkan translation can be faster on Windows than using DX directly? How does that work?
There is no such thing as “directly” DX. The drivers of the major GPU vendors on Windows must also implement DX on top of their internal abstractions over the hardware.
While Vulkan will theoretically always have more “overhead” compared to using the hardware directly in the best possible manner, the latter isn’t even close to being done anywhere as it’s not feasible.
Therefore, situations where a driver implemented atop VK is faster than a “native” driver are absolutely possible, though not yet common. Other real-world examples include Mesa’s Zink atop AMD’s Windows VK driver being much better than AMD’s “native” OpenGL driver, leading to the dev studio of an aircraft sim shipping it in a real game.
Me reading this comment chain:
> leading to the dev studio of an aircraft sim shipping it in a real game.
Is it X-Plane?
IIRC the main DXVK dev does this for debugging purposes.
As to why it might be faster, it depends on the DX implementation and what it’s being transformed into. If the original DX implementation, especially pre-DX12, is wasteful in terms of instructions, and DXVK knows of a better way to do the exact same thing in Vulkan, it can potentially offset the translation costs.
The APIs aren’t wildly different, so it’s not so much a translation as an implementation of the DirectX API. Some GPU vendors have better Vulkan drivers than DX drivers (e.g. Intel), which may give performance improvements.
Besides speed, it’s also really useful for older games with unstable graphics renderers that don’t play nice with modern hardware. When I was still on Windows, I used DXVK on Fallout: New Vegas and Driver: Parallel Lines, and they decreased crashes by a LOT compared to when they ran on native DX9.
In terms of speed, obviously I didn’t notice much of a difference with D:PL since it’s a 2006 game that’s not demanding at all, but F:NV did seem to run better and feel less laggy in general (not only is FNV poorly optimized, but I also use a lot of graphics mods for it).
I used DXVK for Dragon’s Dogma on Windows because it ran better overall vs. DirectX 9, which the game uses natively.
This was on an AMD RX 6800 XT.
The README does not say which DirectX version they are targeting. The screenshot shows “DirectX 0”. Looking at the code, I see a directory called “d3d9”, but those files are mostly empty.
So yeah… nothing to see here. Maybe in 5 or 10 years.
Here’s the roadmap: https://github.com/EduApps-CDG/OpenDX/discussions/10
TL;DR: they’re targeting DX9 initially, later expanding to include DX12.
And the bit saying DxDiag opens faster feels really strange…
Too late, I already hyped it on Mastodon.
But yeah, seems like a glimmer in the postman’s eyes at the moment.
That repo is just pure trolling; read the “Improved performance” section and open some source files and you’ll understand why.
mhhhhhhhhhh
The project is trolling fs
Why is this not being developed inside Mesa? There’s even precedent for it: Gallium Nine.
Because DirectX apps typically do not only call into DirectX but also the win32 API, since DirectX has historically been a Windows-only API. Merging this into Mesa would only bloat Mesa while not really offering support for many applications at all.
This is a great project in general, but it’s quite overshadowed by DXVK, which does the same except it translates DX calls to Vulkan ones and has excellent success rates in Proton and derivatives. I guess this is mildly useful for systems that don’t support Vulkan but want to run DX apps in raw Wine, or simply for people who wish not to use DXVK - competition is good for the ecosystem.
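To make the win32 coupling concrete, here’s a minimal sketch (error handling omitted; this is illustrative, not the project’s code): even the smallest D3D9 program can’t create a device without an HWND from the win32 windowing API.

```cpp
#include <windows.h>
#include <d3d9.h>

int main() {
    // D3D9 is inseparable from win32: the device needs a window handle.
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = DefWindowProcA;
    wc.hInstance     = GetModuleHandleA(nullptr);
    wc.lpszClassName = "demo";
    RegisterClassA(&wc);
    HWND hwnd = CreateWindowA("demo", "d3d9", WS_OVERLAPPEDWINDOW,
                              0, 0, 640, 480, nullptr, nullptr, wc.hInstance, nullptr);

    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed      = TRUE;
    pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = hwnd;                       // the win32 dependency, right here
    IDirect3DDevice9* dev = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &dev);
    // ...render, present, etc...
    if (dev) dev->Release();
    d3d->Release();
    return 0;
}
```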
> Merging this into Mesa would only bloat Mesa while not really offering support for many applications at all.

But there already is a d3d9 driver inside Mesa?
Imagine all the work we wouldn’t have to re-do if we had just done it right the first time.
Stallman was right, as usual.
This is barely explained and the readme gave me more questions than answers.
I immediately thought it was going to be a library for Wine to use instead of DXVK/VKD3D.
If it’s only for developers to build Linux ports, little to no real-world use is expected, unless it can somehow offer effortless conversions. Even then, developers are likely to prefer relying on Proton/Wine so they can ship a single binary for both platforms, rather than maintaining two separately.
I wonder how much work it will take for drivers to support the API… Or maybe it won’t need anything in Mesa and will somehow work directly on DRM with strictly platform-agnostic code if that’s possible?
Promising better performance than the likes of DXVK is brave, to put it mildly. In many scenarios DXVK can already match or surpass native Windows performance even when running Windows binaries.
> This is barely explained and the readme gave me more questions than answers.
make a pull request to change the readme then
Noob here, but can someone explain to me what’s the advantage of DirectX vs Vulkan, apart from it being around for longer? And why don’t more developers embrace Vulkan for better portability?
OpenGL is actually older. Microsoft just spent a lot of time and money on DX adoption.
Overall, DirectX is the native API of Windows, which has the largest user base. On the other hand, many non-game professional apps use OpenGL/Vulkan.
Also a noob, but from what I understand, Vulkan is more low-level.
Also a noob, but I think Microsoft improved low-level access in recent DX versions
This is correct, while OpenGL and DirectX 11 and before are considered high level APIs, Vulkan and DirectX 12 are both considered low level APIs.
Does this make it harder to implement?
Drawing a gradient triangle in OpenGL, using C, takes about 100-130 lines - it could be fewer, I think. In Vulkan, it takes about a thousand lines.
Source: I wrote a “simple” gradient triangle in Vulkan, using C, in my free time, and created the OpenGL one in C as part of my university coursework.
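For a sense of scale, this is roughly the whole OpenGL side of that comparison - a hedged sketch using the legacy fixed-function pipeline, with GLFW for windowing (my choice, not the commenter’s). The Vulkan version additionally needs an instance, physical/logical device, swapchain, render pass, pipeline, command buffers, and sync objects, which is where the extra ~900 lines go.

```cpp
// Assumes a default (compatibility) GL context where glBegin/glEnd exist.
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(640, 480, "gradient triangle", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);          // one color per vertex = the gradient
        glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
        glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```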
It takes 75 lines to draw a blank window. It takes like three with CoreAnimation on macOS. We really need an OSS take on CoreAnimation, but I’m also fine leaving the graphics work to a game engine.
Lower level means you have more control over the small details. However, that also means that you have to reimplement some things from scratch, while higher level frameworks do those things for you.
^ this is the key
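A tiny sketch of that “reimplement from scratch” point, assuming an already-created VkDevice/VkQueue: in Vulkan, even “wait until the GPU finishes this work” is your job, while an OpenGL driver tracks the equivalent implicitly.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Submit work and block the CPU until the GPU is done with it.
void submitAndWait(VkDevice device, VkQueue queue, const VkSubmitInfo& submit) {
    VkFenceCreateInfo fenceInfo{VK_STRUCTURE_TYPE_FENCE_CREATE_INFO};
    VkFence fence = VK_NULL_HANDLE;
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    vkQueueSubmit(queue, 1, &submit, fence);                  // GPU signals the fence when finished
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);  // explicit CPU-side wait
    vkDestroyFence(device, fence, nullptr);
}
```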
There were two major problems with OpenGL:
- It was originally designed and intended as a professional (high-level) 3D CAD API, not a gaming one.
- Extensive changes to the API were constantly being submitted by different vendors (AMD (ATI), Nvidia, Microsoft, etc.) to enhance its performance on their respective hardware in their respective situations.
This meant that almost every API change submitted by any one vendor was immediately scrutinized as to whether it was for gaming or 3D CAD, and usually disliked for adding bloat the other vendors didn’t need, or worse, causing hardware conflicts that often led to degraded performance for the other vendors.
This is exactly why Nvidia bundles their own version of OpenGL with their drivers: they can make changes immediately, release them to see the impact without waiting for approval, and, if a change does well enough, submit it. At the end of the day, though, some submissions are accepted and others are not, which means Nvidia then has to maintain the rejected changes on their own… so there is a benefit to getting API changes accepted.
Microsoft actually blazed the path that Nvidia took; Windows used to (might still… not sure) ship with its own version of OpenGL binaries, but they disliked having to maintain the changes and fight for acceptance enough that they eventually decided to develop DirectX (among other desires, like access to input and audio, etc.).
DirectX 3D and Vulkan (based on AMD’s Mantle, which was inspired by DirectX 12 3D) do not have these issues because both are low-level APIs, which means that most of the code that would be specific to the GPU or to AMD (ATI), Nvidia, etc. is not hard-coded on the driver side like OpenGL… it is done by the application.
I think you are confused about the difference between the opengl spec and an actual implementation of the spec, and who is responsible for shipping what.
- Nvidia ships their own opengl implementation with their drivers, because that’s what a driver is.
- Microsoft doesn’t ship “opengl binaries”, they don’t have any hardware. Maybe you mean they published their own fork of the ogl spec before giving up and making DX? That may be true.
- Mantle predates DX12, both vulkan and dx12 took inspiration from it, not the other way around.
- There are two interpretations being thrown around for “low level”:
- The more traditional meaning is “how far are you from natively talking to hardware?”, which is not determined by the rendering API but by the specific implementation. Ex. Nvidia’s DX9 driver is equally “low level” as their DX12 driver, in that the API calls you make are one step away from sending commands directly to GPU hardware. Meanwhile, using DX12 via DXVK would be two steps away from hardware, which is “higher level” than just using Nvidia’s DX9 implementation directly. Again, “level” is not determined by the API.
- The other interpretation is what I would call “granularity” or “terseness” of the API, i.e. how much control over the hardware it exposes. In this case, yes, DX12 and Vulkan give finer control over the hardware vs DX9 and OGL.
- Your last statement… doesn’t make sense; I don’t understand it. Maybe you’re trying to say that DX12/VK are made to be thinner, with less internal state tracking and less overhead per call, and that all of that state tracking is therefore now the app’s responsibility? Yes, that is true. But I wouldn’t say that code is “specific to a GPU”.
> Nvidia ships their own opengl implementation with their drivers, because that’s what a driver is.
Including OpenGL does not a driver make… i.e. Nvidia doesn’t have to ship their own implementation of OpenGL. They could do what AMD does on Linux and rely on the upstream OpenGL implementation from Mesa; however, they choose not to for the reasons I outlined, among others.
> Microsoft doesn’t ship “opengl binaries”, they don’t have any hardware.
There was a time when they did, yes, before DirectX existed.
> Maybe you mean they published their own fork of the ogl spec before giving up and making DX? That may be true.
No, they made their own contributions to the spec to improve Windows game performance, but didn’t publish their own spec; however, they did implement the upstream spec with their contributions and ship it integrated into Windows. This was practically over by 1995, when DirectX was introduced, so a very long time ago.
> Mantle predates DX12, both vulkan and dx12 took inspiration from it, not the other way around.
Yes and no… Direct3D was always low-level; it’s why DirectX (besides being a one-stop shop) worked so well for Xbox, etc. So AMD got the idea for Mantle from MS DirectX, and when AMD met with Khronos to spin off Vulkan, MS took notice that their implementation was not as low-level as DirectX 11 and actually made DirectX 12 less low-level dependent.
> Ex. Nvidia’s DX9 driver is equally “low level” as their DX12 driver
No it’s not, see above… DirectX 9 is actually much lower level than 12; however, DirectX 12 has many more requirements for certain tech that today’s games see as necessary, which DirectX 9 didn’t.
> DX12 and Vulkan give finer control over the hardware vs DX9 and OGL.
Yes and no… it depends on the particular portion of the spec you’re talking about. For example, DirectX 9 had much lower-level control of the CPU, but as time moved on and CPU reliance lessened, DirectX 12 ended up with less control of the CPU but more control of the GPU.
So, here’s the thing, I don’t consider myself an expert in many things, but this subject is literally my day job, and it’s possibly the only thing I do consider myself an expert in. And I’m telling you, you are confused and I would gladly help clear it up if you’ll allow me.
> They could do what AMD does on Linux and rely on the upstream OpenGL implementation from Mesa
Nvidia’s OGL driver is a driver. Mesa’s radv backend is a driver. Nouveau, the open-source Nvidia Mesa backend, is a driver. An OpenGL implementation does a driver make.
> There was a time when they did, yes
What GPU did Microsoft’s driver target? Or are you referring to a software implementation?
> Yes and no… Direct3D was always low-level
You literally said that Mantle was inspired by DX12, which is false. You can try to pivot to regurgitating more Mantle history, but I’m just saying…
> No it’s not, see above…
Yes, it is, see above my disambiguation of the term “low-level”. The entire programming community has always used the term to refer to how far “above the metal” you are, not how granular an API is. The first-party DX9 and DX12 drivers are equally “low-level”, take it from someone who literally wrote them for a living. The APIs themselves function very differently to give finer control over the hardware, and many news outlets and forums full of confused information (like this one) like to imply that that means it’s “lower level”.
Your last statement doesn’t make sense, so I don’t know how to correct it.
I think it’s more about portability and making it easier for windows devs to support Linux for their games
Aside from “ew, installing Winblows stuff in my distro, ewwww”, this will be a game changer if they do it right.
How would a native implementation be better than DXVK? Wouldn’t developers still need to port the rest of their app to Linux to use it? At that point, you could still just include DXVK; would the performance really be that much worse?
Native Vulkan or OpenGL games don’t need to translate these calls. If DirectX could run natively on Linux, it wouldn’t have to be translated either.
Afaik the only way to avoid translating into OpenGL and Vulkan would be to write native drivers. Stuff like gallium-nine, for instance. Is that what this project is doing? Though obviously that’s just for the Direct3D side of things and there’s a lot more to DirectX than just that. Still, it’s hard not to question how much of this is just duplicating work already done for Wine.
Could be big. Love Wine, but even some games with a native Linux release rely on Wine.
This seems incorrect; if it’s running natively, it doesn’t need to rely on Wine…
There’s a few Linux “native” releases on steam that use compatibility layers based on wine behind the scenes, which I think is probably what they mean.
Also, this feels wrong, but… is Wine native? It’s mostly just the Windows API implemented as Linux libraries. What’s the distinction that makes it “non-native” compared to other libraries? Is SDL non-native too?
Yes this is what I meant, thank you.
Cities skylines is one example.
“Native” means “no platform-specific runtime translation layers”. An app built on SDL resolves the final rendering API calls at compile time. But a DX app running on Linux has to do JIT translation to OGL/VK when running through Wine, which is just overhead.
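To illustrate the compile-time point, a hedged sketch: the same SDL2 source builds unchanged on Windows and Linux, and SDL resolves each platform’s windowing/rendering backend without translating a foreign API at runtime.

```cpp
#include <SDL.h>

int main(int argc, char** argv) {
    // Identical code path on Windows and Linux: SDL is an abstraction
    // layer, not a runtime translator of another platform's API.
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window* win = SDL_CreateWindow("native everywhere",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    SDL_bool running = SDL_TRUE;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = SDL_FALSE;
        SDL_SetRenderDrawColor(ren, 30, 30, 30, 255);
        SDL_RenderClear(ren);
        SDL_RenderPresent(ren);
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```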
My understanding is that DXVK implements the Direct3D API using Vulkan behind the scenes. So, sure, there might be a bit of overhead versus a more direct implementation. Frankly, this doesn’t feel all that different from something like SDL to me. Shaders will have to be compiled into shaders that Vulkan understands, but you could just think of this as part of the front end for shader compilation.
I do agree that it feels less native to me too (particularly over the rest of wine), but it’s sort of an arbitrary distinction.
An app running on SDL which targets OGL/Vulkan goes through all the same levels of abstraction on Windows as it does on Linux. The work needed at runtime is the same regardless of platform. Therefore, we say it natively supports both platforms.
But for an app running DX: on Windows the DX calls talk directly to the DX driver for the GPU, which we call native; on Linux the DX calls are translated at runtime to Vulkan calls, then the Vulkan calls go to the driver, which goes to the hardware. There is an extra level of translation required on one platform that isn’t required on the other. So we call that non-native.
Shader compilation has its own quirks. DX apps don’t ship HLSL; they precompile their shaders to DXIL, which is passed to the next layer. On Windows, it then gets translated directly to native ISA to be executed on the GPU EUs/CUs/whatever you wanna call them. On Linux, the DXIL gets translated to SPIR-V, which is then passed to the Vulkan driver, where it is translated again to the native ISA.
But also, the native ISA can be serialized out to a file and saved so it doesn’t have to be done every time the game runs. So this is only really a problem the first time a given shader is encountered (or until you update the app or your drivers).
Finally, this extra translation of DXIL through SPIR-V often has to be more conservative to ensure correct behavior, which can add overhead. That is to say, even though you might be running on the same GPU, the native ISA generated through the two paths is unlikely to be identical, and one will likely perform better - and it’s more likely to be the DXIL->ISA path, because that’s the one that gets more attention from driver devs (e.g. Nvidia/AMD engineers optimizing their compilers).
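The “serialize it out” step above is what Vulkan’s pipeline cache exposes. A rough sketch (assuming an already-created VkDevice, error handling omitted) of persisting the driver’s compiled blob between runs:

```cpp
#include <vulkan/vulkan.h>
#include <fstream>
#include <iterator>
#include <vector>

// Load a previously saved cache blob (empty on first run) and hand it
// to the driver, which validates and reuses its compiled artifacts.
VkPipelineCache loadPipelineCache(VkDevice device, const char* path) {
    std::vector<char> blob;
    std::ifstream in(path, std::ios::binary);
    if (in) blob.assign(std::istreambuf_iterator<char>(in),
                        std::istreambuf_iterator<char>());

    VkPipelineCacheCreateInfo info{VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO};
    info.initialDataSize = blob.size();
    info.pInitialData    = blob.empty() ? nullptr : blob.data();

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;
}

// After building pipelines, persist whatever the driver compiled.
void savePipelineCache(VkDevice device, VkPipelineCache cache, const char* path) {
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);   // query blob size
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());
    std::ofstream(path, std::ios::binary)
        .write(blob.data(), static_cast<std::streamsize>(size));
}
```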
You’re not wrong, and the translation layers definitely do make a difference for performance. Still, it’s not all that different from a slightly slow, slightly odd “native” implementation of the APIs. The division is more obvious when it’s something like Rosetta translating between entirely different ISAs.
SDL isn’t adding any runtime translation overhead; that’s the difference. SDL is an abstraction layer, just like UE’s RHI or the Unity render backends. All the translation is figured out at compile time; there’s no runtime JITting of instructions for the given platform.
It’s a similar situation with dynamic libraries: using a DLL or .so doesn’t mean you’re not running code natively on the CPU. But the Java or .NET runtimes are JITing bytecode to the CPU ISA at runtime; they are not native.
I’m sorry if I’m not explaining myself well enough; I’m not sure where the confusion still lies, but using just SDL does not make an app non-native. As a Linux gamer, I would love it if more indie games used SDL, since it’s more than capable for most titles and would support both Windows and Linux natively.
I didn’t see any wine binaries in my Linux native game. Care to give a few examples?
I think anything that CodeWeavers helped port. I think Bioshock Infinite is one such game. I’m not sure if you’d see wine binaries, though, could all be statically linked in.
How is this different from DXVK?
It’s made to interact directly with the GPU instead of translating each call to the equivalent Vulkan call.
IIRC, DXVK translates DirectX API calls to Vulkan calls, meaning the original game renders through Vulkan in the end. With this, no translation would be needed, which should result in slightly better performance and, more likely, much better compatibility.
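For intuition, an illustrative sketch of what that translation means in practice - this is not DXVK’s actual code, and `cmdBuf`/`backbufferImage` are hypothetical state such a layer would track:

```cpp
#include <vulkan/vulkan.h>

struct TranslatedDevice {
    VkCommandBuffer cmdBuf;        // hypothetical: command buffer being recorded
    VkImage backbufferImage;       // hypothetical: image backing the D3D backbuffer

    // Rough analogue of IDirect3DDevice9::Clear(..., D3DCLEAR_TARGET, color, ...):
    // the D3D call is re-expressed as a Vulkan command.
    void Clear(float r, float g, float b, float a) {
        VkClearColorValue color{{r, g, b, a}};
        VkImageSubresourceRange range{VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
        // Assumes the image is already in VK_IMAGE_LAYOUT_GENERAL.
        vkCmdClearColorImage(cmdBuf, backbufferImage,
                             VK_IMAGE_LAYOUT_GENERAL, &color, 1, &range);
    }
};
```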
IIRC the translation overhead is usually negligible and sometimes results in better performance due to Vulkan being very performant.
Doesn’t DirectX require a lot of stuff from winapi?
Thought so as well. In which case I do not really see much difference between this and other translation layers.
Excited to see how this plays out. Looks like there’s basically nothing implemented yet though.
Holly Fuck!
Poor Holly
Tis the season (for holly).
One of these days I’ll be able to play quake in QubesOS
Play Xonotic in Linux. Or Quake.
Is anyone still playing Xonotic? I used to play Nexuiz back before they sold the name, and tried Xonotic recently, only to find servers with at most one other player idling around. I genuinely thought it was dead.
I haven’t played for a year or two, but Xonotic doesn’t have many concurrent players for most of the day. I believe lobbies filled up around evening/night UTC±0, IIRC.
I know four major servers: Feris and three of Jeff’s. Feris is the most populated one during the European night. And there are pickup servers that usually play 4v4. Smaller servers also exist that occasionally get players, like xonotic-relax.ru.
Doesn’t work in Qubes
Xonotic or Quake?
I use Qubes, btw. But you wouldn’t even know.