Wonder if it will be CTRL + SHIFT + ALT + WIN + C
No one is advocating X11. It’s hard to have a constructive conversation about the shortcomings of Wayland when every apologist seems to immediately go off topic.
“I don’t want to listen because you don’t know the technical challenges. Oh, you have a long list of credentials? I don’t want to listen to an argument from authority. X11 bad, therefore Wayland good.”
OP even brings up Mir, but you never see Wayland proponents talk about why they think Wayland is better.
I’m learning a lot, so I’m not a fan of the people flaming and downvoting OP for having genuine confusion. I want us to incentivize more posts like this.
Yes, the jitting is specific to the graphics APIs. DXVK is doing runtime translation from DX to VK. Where possible, it’s certainly just making a direct 1:1 call, but since the APIs don’t map cleanly onto each other, in some cases it will need to store state and “emulate” certain API behavior using multiple VK calls. This is much more the case when translating DX9/11.
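As a made-up illustration (this is NOT DXVK’s actual code, just a sketch of the shape of the problem): a DX9-style SetRenderState call has no direct Vulkan equivalent, because that kind of state gets baked into a Vulkan pipeline, so the layer has to remember what the app asked for and resolve it later at draw time.

```cpp
// Hypothetical translation-layer sketch (made-up names, not DXVK's code).
// A DX9-style SetRenderState has no 1:1 Vulkan call, so the layer records
// the state and "emulates" the behavior with multiple Vulkan calls at draw.
#include <cstdint>
#include <map>

enum class RenderState { AlphaBlendEnable, SrcBlend, DestBlend };

class TranslatedDevice {
public:
    // Can't forward this directly: just remember what the app asked for.
    void SetRenderState(RenderState state, uint32_t value) {
        pendingState_[state] = value;
        stateDirty_ = true;
    }

    void DrawPrimitive() {
        if (stateDirty_) {
            // Resolve the accumulated state into a Vulkan pipeline, either
            // fetched from a cache or compiled the first time this exact
            // state combination shows up.
            bindPipelineForState(pendingState_);
            stateDirty_ = false;
        }
        // ... record the actual draw (vkCmdDraw) on the command buffer ...
    }

private:
    void bindPipelineForState(const std::map<RenderState, uint32_t>& /*state*/) {
        // vkCreateGraphicsPipelines / vkCmdBindPipeline would go here.
    }

    std::map<RenderState, uint32_t> pendingState_;
    bool stateDirty_ = false;
};
```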
Ultimately the native vs. non-native distinction doesn’t really matter, and arguably this distinction doesn’t even really exist
Alright. Just letting you know you’re going to have a hard time communicating with people in this industry if you continue rejecting widely accepted terminology. Cheers.
So, here’s the thing, I don’t consider myself an expert in many things, but this subject is literally my day job, and it’s possibly the only thing I do consider myself an expert in. And I’m telling you, you are confused and I would gladly help clear it up if you’ll allow me.
They could do what AMD does on Linux and rely on the upstream OpenGL implementation from Mesa.
Nvidia’s OGL driver is a driver. Mesa’s radv backend is a driver. Nouveau, the open-source Nvidia Mesa backend, is a driver. An OpenGL implementation does a driver make.
There was a time they did, yes
What GPU did Microsoft’s driver target? Or are you referring to a software implementation?
Yes and no… Direct3D was always low-level
You literally said that Mantle was inspired by DX12, which is false. You can try to pivot to regurgitating more Mantle history, but I’m just saying…
No, it’s not, see above…
Yes, it is; see above my disambiguation of the term “low-level”. The entire programming community has always used the term to refer to how far “above the metal” you are, not how granular an API is. The first-party DX9 and DX12 drivers are equally “low-level”; take it from someone who literally wrote them for a living. The APIs themselves function very differently to give the application finer-grained control, and many news outlets and forums full of confused information (like this one) like to infer that that means it’s “lower level”.
Your last statement doesn’t make sense, so I don’t know how to correct it.
you still have additional function calls and overhead wrapping lower level libraries
But it all happens at compile time. That’s the difference.
You probably wouldn’t consider C code non-native
This goes back to your point above:
It’s like when people say a language is “a compiled language” when that doesn’t really have much to do with the language
C is just a language; it isn’t “native” by itself. Native means the binary that will execute on the hardware is decided at compile time; in other words, it’s not jitted for the platform it’s running on.
usually you consider compilers that use C as a backend to be native code compilers too
I assume you’re not talking about a compiler that generates C code here, right? If it’s outputting C, then no, it’s not native code yet.
so why would you consider HLSL -> SPIR-V to be any different?
Well, first off, games don’t ship with their HLSL (unlike OGL, where older games DID have to ship with GLSL); they ship with DXBC/DXIL, which is the DX analog to SPIR-V (or, more accurately, vice versa).
Shader code is jitted on all PC platforms, yes. This is why I said above that shader code has its own quirks, but on platforms where the graphics API effectively needs to be interpreted at runtime, the shaders have to be jitted twice.
SDL isn’t adding any runtime translation overhead; that’s the difference. SDL is an abstraction layer, just like UE’s RHI or the Unity render backends. All the translation is figured out at compile time; there’s no runtime jitting of instructions for the given platform.
It’s a similar situation with dynamic libraries: using a DLL or .so doesn’t mean you’re not running code natively on the CPU. But the Java or .NET runtimes are jitting bytecode to the CPU ISA at runtime; they are not native.
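A tiny sketch of that distinction (libfoo.so and foo_add are made-up names): loading a .so at runtime just jumps into machine code that was already produced at compile time; nothing gets jitted.

```cpp
// Minimal sketch: loading a shared library at runtime still executes machine
// code that was produced at compile time; nothing is jitted. "libfoo.so" and
// "foo_add" are made-up names for illustration.
#include <dlfcn.h>
#include <cstdio>

int main() {
    void* lib = dlopen("./libfoo.so", RTLD_NOW);
    if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    // The looked-up symbol points straight at already-compiled native
    // instructions for this CPU.
    auto foo_add = reinterpret_cast<int (*)(int, int)>(dlsym(lib, "foo_add"));
    std::printf("2 + 3 = %d\n", foo_add(2, 3));

    dlclose(lib);
    return 0;
}
```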
I’m sorry if I’m not explaining myself well enough; I’m not sure where the confusion still lies, but using just SDL does not make an app non-native. As a Linux gamer, I would love it if more indie games used SDL, since it is more than capable for most titles and would support both Windows and Linux natively.
An app running on SDL which targets OGL/Vulkan goes through all the same levels of abstraction on Windows as it does on Linux. The work needed at runtime is the same regardless of platform. Therefore, we say it natively supports both platforms.
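To make that concrete, here’s a minimal SDL2 sketch (the window title and dimensions are arbitrary): the exact same source builds to a native binary on Windows and on Linux, and SDL’s backend choice is made when the library is built/initialized, not by translating calls at runtime.

```cpp
// The same SDL2 source compiles to a native binary on Windows and Linux;
// no platform-specific runtime translation layer is involved.
#include <SDL.h>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    SDL_Window* win = SDL_CreateWindow("native on both platforms",
                                       SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED,
                                       640, 480, SDL_WINDOW_SHOWN);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    // Clear to a dark gray, present, and wait a couple of seconds.
    SDL_SetRenderDrawColor(ren, 32, 32, 32, 255);
    SDL_RenderClear(ren);
    SDL_RenderPresent(ren);
    SDL_Delay(2000);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```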
But for an app running DX: on Windows, the DX calls talk directly to the GPU’s DX driver, which we call native. On Linux, the DX calls are translated at runtime to Vulkan calls, and the Vulkan calls then go to the driver, which talks to the hardware. There is an extra level of translation required on one platform that isn’t required on the other, so we call that non-native.
Shader compilation has its own quirks. DX apps don’t ship with HLSL; they precompile their shaders to DXIL, which is passed to the next layer. On Windows, it then gets translated directly to native ISA to be executed on the GPU’s EUs/CUs/whatever you wanna call them. On Linux, the DXIL gets translated to SPIR-V, which is then passed to the Vulkan driver, where it is translated again to the native ISA.
But also, the native ISA can be serialized out to a file and saved so it doesn’t have to be done every time the game runs. So this is only really a problem the first time a given shader is encountered (or until you update the app or your drivers).
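On the Vulkan side, that caching is what VkPipelineCache is for. A rough sketch (assuming you already have a valid VkDevice and VkPipelineCache in hand): the driver hands back an opaque blob of compiled pipelines that you can dump to disk and feed back in through VkPipelineCacheCreateInfo::pInitialData on the next launch.

```cpp
// Sketch: serialize a Vulkan pipeline cache so compiled shaders don't have
// to be rebuilt every run. Assumes `device` and `cache` are already valid.
#include <vulkan/vulkan.h>
#include <fstream>
#include <vector>

void save_pipeline_cache(VkDevice device, VkPipelineCache cache, const char* path) {
    // First call reports the blob size, second call fills the buffer.
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());

    // Next launch: read this file back and pass it as pInitialData in
    // VkPipelineCacheCreateInfo when creating the cache.
    std::ofstream out(path, std::ios::binary);
    out.write(blob.data(), static_cast<std::streamsize>(size));
}
```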
Finally, this extra translation of DXIL through SPIR-V often has to be more conservative to ensure correct behavior, which can add overhead. That is to say, even though you might be running on the same GPU, the native ISA generated through the two paths is unlikely to be identical, and one will likely perform better. It’s more likely to be the DXIL->ISA path, because that’s the one that gets more attention from driver devs (e.g., Nvidia/AMD engineers optimizing their compilers).
I think you are confused about the difference between the opengl spec and an actual implementation of the spec, and who is responsible for shipping what.
“Native” means “no platform-specific runtime translation layers”. An app built on SDL does the translation to the final rendering API calls at compile time. But a DX app running on Linux has to do JIT translation to OGL/VK when running through Wine, which is just overhead.
So, you’re what’s called an exception, not the rule. Just because YOU need Windows doesn’t mean literally no one would have use for e-waste revived through Linux.
I run programs made exclusively for Windows on Linux using Wine daily. And maybe you like to fuck around distro hopping when you use Linux, but the rest of us just fucking use our computers like normal people. (See, I can be condescending too.)
Someone should open a business taking free perfectly good laptops people were going to throw out, putting Linux on them, and reselling them.
Goodwill could do this with anything they get donated.
They have an option in their own pamac GUI to enable the AUR. IMO if they want to send the message that it will cause issues and it shouldn’t be used, they shouldn’t make it so easy to enable. Or if they do want to make it that easy, display a clear disclaimer about the issues you can expect to run into if you try it.
Welcome to the community! I think you can learn to like the terminal with time :). But more power to you if you can use Linux without ever touching the command line.
I do think the only real way to compete with the Windows/Mac UX is to never show a command line to someone who doesn’t know what to do with it, and still remain operational. As of now, with most distros, if certain things fail to load you end up looking at a command line (not sure about Ubuntu or ChromeOS).
It’s important to know that just because your computer booted to a command line doesn’t mean the whole system is hosed. It likely just means a UI program failed to start for some reason and otherwise your system is working fine.
I have the same question. I remember a while back Linus went off on someone for using the term “woke communist”, so that probably made the rounds in the trans community. Might be what they’re referring to.
I’d prefer Linux not be tied to the politics of its creator. I don’t expect Linus to be a perfect person any more than I expect Linux to be a perfect OS. But one of those can be fixed with a quick patch.
It’s definitely for troll-farming reasons. Most likely they’re using it to create legit-seeming accounts that they can then sell to a troll farm, which will use them to influence a product or an election or something. Using AI to slightly vary content that they already know goes viral makes finding new content to share much cheaper.
IMO the single biggest risk to the fediverse is allowing one instance to control everything. I think there needs to be a set of ideologies that all fediverse users agree to abide by for the good of the fediverse, the first of which would be: instances should not federate with other instances that are too big. I don’t think “too big” needs to be strictly defined; it could be left up to people to decide. But if users or other instances think your idea of “too big” is too big, they are all free to leave/defederate from you too.
Idk anything about Flipboard, much less how many users they have, but if it’s much larger than existing servers, then I think they should make an effort to shrink, or at least freeze signups, or be defederated.
Also yes, I think mastodon.social and lemmy.world are too big and should make an effort to downsize asap.
Even if we rewind to before the advent of AI-generated images: if someone were to take his photo of his art and painstakingly use Photoshop to create a believable second image with a different person standing next to it, representing it as their own without giving him any credit, we would call that process “stealing”.
Let’s Encrypt is good practice, but IMO if you’re just serving the same static webpage to all users, it doesn’t really matter.
Given that it’s a tiny raspi, I’d recommend reducing the overhead that WordPress brings and just statically serving a directory with your site. Whether that means using WP’s static-site options or moving away from WP entirely is up to you.
The worst-case scenario would be someone finding a vulnerability in the publicly exposed services (Apache), getting persistence on the device, and using that to pivot to other devices on your network. If possible, consider putting it in a DMZ: make sure the Pi can only see the internet and whatever device you plan to maintain it with. That way, even if someone somehow owns it completely, they won’t be able to find any other devices to hack.