How about just LibreOffice resume templates?
Yoko, Shinobu, and… um… 🤔
That repo is pure trolling; read the “Improved performance” section and open a few of the source files and you’ll understand why.
A company accused by the far-right of being “too woke” is helping a far-right platform survive, the irony.
Lemmy needs an eyebleach community, now
I checked a few months ago and they still haven’t caught a single man in black
You haven’t read Detective Conan, it seems. At least with One Piece the plot is progressing.
High quality shitpost right there
only one baguette? smh
FreeBSD is now obsolete
Sincerely, fuck you.
they obviously have upscalers in their brains
Biased opinion here, as I haven’t used GNOME since the switch to version 3 and I dislike it a lot: the animations are so slow that they demand a good GPU with fast vRAM to hide it, so the developers have to borrow techniques from game/GPU programming to make GNOME fluid for users with less beefy cards.
You just need more EXP to unlock the Appraisal skill
It’s as if you are in an isekai
Double and triple buffering are techniques from GPU rendering (double buffering is also used in pure GPU computing, but only up to double, since triple buffering is pointless when running headless).
Without them, if you want to do some number crunching on your GPU while your data lives in host (“CPU”) memory, you’d basically transfer a chunk of that data from the host to a buffer in device (GPU) memory and then run your GPU algorithm on it. There’s one big issue here: during the memory transfer your GPU sits idle, waiting for the copy to finish, so you’re wasting precious GPU compute.
So GPU programmers came up with a trick to reduce or even hide that latency: double buffering. As the name suggests, the idea is to allocate not one but two buffers of the same size on your GPU; let’s call them `buffer_0` and `buffer_1`. If your algorithm is iterative and you have a bunch of chunks in host memory that you want to run the same GPU code on, then on the first iteration you take a chunk from host memory and send it to `buffer_0`, then run your GPU code asynchronously on that buffer. While it’s running, your CPU has control back and can do something else, so you immediately prepare for the next iteration: you pick another chunk and send it asynchronously to `buffer_1`. When the previous asynchronous kernel run has finished, you rerun the same kernel, this time on `buffer_1`, again asynchronously. Then you copy, asynchronously again, another chunk from the host to `buffer_0`, and you keep swapping the buffers like this for the rest of your loop.
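Here’s a minimal sketch of that loop in CUDA, just to make it concrete. The `process` kernel, the chunk size, and the stream setup are all made-up assumptions for the example, and error checking is left out:

```cpp
// Double buffering sketch (CUDA). Kernel, sizes, and names are
// illustrative assumptions; error checking omitted for brevity.
#include <cuda_runtime.h>

__global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;  // stand-in for the real per-chunk work
}

int main() {
    const int chunk = 1 << 20, nChunks = 8;

    // Pinned host memory, so the async host->device copies can actually
    // overlap with kernel execution instead of silently serializing.
    float *host;
    cudaMallocHost(&host, (size_t)chunk * nChunks * sizeof(float));
    for (size_t i = 0; i < (size_t)chunk * nChunks; ++i) host[i] = 1.0f;

    // The two device buffers, one stream per buffer.
    float *buf[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&buf[b], chunk * sizeof(float));
        cudaStreamCreate(&stream[b]);
    }

    for (int c = 0; c < nChunks; ++c) {
        int b = c % 2;  // swap between buffer_0 and buffer_1
        // Make sure the previous work queued on this buffer is done
        // before overwriting it.
        cudaStreamSynchronize(stream[b]);
        // Queue the copy and the kernel asynchronously: while they run,
        // the loop comes back around and feeds the *other* buffer.
        cudaMemcpyAsync(buf[b], host + (size_t)c * chunk,
                        chunk * sizeof(float), cudaMemcpyHostToDevice,
                        stream[b]);
        process<<<(chunk + 255) / 256, 256, 0, stream[b]>>>(buf[b], chunk);
    }
    cudaDeviceSynchronize();

    for (int b = 0; b < 2; ++b) {
        cudaFree(buf[b]);
        cudaStreamDestroy(stream[b]);
    }
    cudaFreeHost(host);
    return 0;
}
```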
Now some GPU programmers don’t just want to compute stuff, they might also want to render stuff on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host->GPU copy and/or the GPU kernel can keep overwriting buffers before they finish rendering to the screen, which causes tearing.
So those programmers pushed the double-buffering idea a bit further: add one more buffer to hide the latency of sending things to the screen, and that gives us triple buffering. You can guess how it works, because it’s exactly the same principle.
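Same sketch extended to three buffers, with a device->host copy standing in for the “send to the screen” step (in real rendering that last stage would be a present/swap call, not a memcpy; again, all names and sizes are made up):

```cpp
// Triple buffering sketch (CUDA). At steady state, one buffer is receiving
// new data, one is being computed on, and one is being read back out,
// all at the same time. Illustrative only; error checking omitted.
#include <cuda_runtime.h>

__global__ void shade(float *px, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] = px[i] * 0.5f + 0.5f;  // stand-in for rendering work
}

int main() {
    const int frame = 1 << 20, nFrames = 9;

    float *in, *out;  // pinned host memory, as before
    cudaMallocHost(&in, (size_t)frame * nFrames * sizeof(float));
    cudaMallocHost(&out, (size_t)frame * nFrames * sizeof(float));

    float *buf[3];
    cudaStream_t stream[3];
    for (int b = 0; b < 3; ++b) {
        cudaMalloc(&buf[b], frame * sizeof(float));
        cudaStreamCreate(&stream[b]);
    }

    for (int f = 0; f < nFrames; ++f) {
        int b = f % 3;  // rotate through the three buffers
        cudaStreamSynchronize(stream[b]);  // reuse only once its read-out is done
        // Upload, compute, and read-out are queued on this buffer's stream;
        // across the three streams, all three stages run concurrently.
        cudaMemcpyAsync(buf[b], in + (size_t)f * frame,
                        frame * sizeof(float), cudaMemcpyHostToDevice,
                        stream[b]);
        shade<<<(frame + 255) / 256, 256, 0, stream[b]>>>(buf[b], frame);
        cudaMemcpyAsync(out + (size_t)f * frame, buf[b],
                        frame * sizeof(float), cudaMemcpyDeviceToHost,
                        stream[b]);  // stand-in for presenting the frame
    }
    cudaDeviceSynchronize();

    for (int b = 0; b < 3; ++b) {
        cudaFree(buf[b]);
        cudaStreamDestroy(stream[b]);
    }
    cudaFreeHost(in);
    cudaFreeHost(out);
    return 0;
}
```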
We have Nvidia, who cling selfishly to their proprietary blobs, and I can’t help but wonder how great it could be if they opened them up and let the community in.
Nvidia is doing that because they don’t want people to deploy gaming GPUs in datacenters, and they can currently enforce that through their driver license. That license is what lets them force most enterprise users to buy expensive A100/H100 datacenter GPUs and rake in really fat margins, when a couple of RTX 4090 cards would actually be enough to do the job with good cost efficiency. The control Nvidia has through that license is not something they’re ready to give up, and that’s why they keep giving the middle finger to the FOSS community.
(before anyone mentions vast.ai as a counter-example, those RTX 4090 compute sellers are indeed breaking Nvidia’s EULA)
“With the new Desktop Cube, you can switch between workspaces in 3D. Your app windows float off the desktop surface with a parallax effect, so you can see behind them,” said the Zorin OS team. “There’s also the new Spatial Window Switcher, which replaces the standard flat Alt+Tab and Super+Tab dialog with a 3D window switcher.”
Compiz Fusion is an idea and ideas never die
also Proton-GE with AMD FSR is basically just like downloading more FPS no matter which game you’re playing
Gonna fire the first bullet:
(I also use Arch btw)
Me when someone’s Ubuntu install reaches EOL: just install Arch