It’s the same as glxgears but for EGL and Wayland. It tests that OpenGL works.
BlueSky is its own thing with its own federated protocol called ATproto. They have an explanation in their docs of how it works and how its features differ. There’s a bridge between the two as well, a bit janky but effective.
Yeah, that didn’t stop it from pwning a good chunk of the Internet: https://en.wikipedia.org/wiki/Log4Shell
IMO the biggest attack vector there would be a Minecraft exploit like log4j, so the most important part to me would be making sure the game server is properly sandboxed just in case. Start from the point of view that the attacker has breached Minecraft and has shell access as that user. What can they do from there? Ideally, nothing useful other than maybe running a crypto miner. Don’t reuse passwords, obviously.
With systemd, I’d use the various Protect* directives like ProtectHome and ProtectSystem=full, or, failing that, a container (Docker, Podman, LXC, manually; there are options). Just a bare Alpine container with Java would be pretty ideal, as you can’t exploit sudo or other SUID binaries if they don’t exist in the first place.
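If it helps, here’s a minimal sketch of what that looks like as a systemd drop-in, assuming the server runs as a unit called minecraft.service (the unit name and which directives your setup tolerates are assumptions, so test it):

```bash
# Create a hardening drop-in for a hypothetical minecraft.service unit
# (the unit name is an assumption; adjust to whatever your service is called).
sudo mkdir -p /etc/systemd/system/minecraft.service.d
sudo tee /etc/systemd/system/minecraft.service.d/hardening.conf >/dev/null <<'EOF'
[Service]
# Hide /home, /root and /run/user from the service
ProtectHome=yes
# Mount /usr, /boot and /etc read-only for the service
ProtectSystem=full
# Private /tmp and no raw device access
PrivateTmp=yes
PrivateDevices=yes
# Block privilege escalation through setuid binaries like sudo
NoNewPrivileges=yes
EOF
sudo systemctl daemon-reload
sudo systemctl restart minecraft.service
```

`systemd-analyze security minecraft.service` will then give you a rough score of how locked down the unit is.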
That said the WireGuard solution is ideal because it limits potential attackers to people you handed a key, so at least you’d know who breached you.
I’ve forgotten Minecraft servers running online and really nothing happened whatsoever.
Kind of but also not really.
Docker is one kind of container, and a container is really just a set of Linux namespaces.
It’s possible to run containers as if they were virtual machines with LXC, LXD, or systemd-nspawn. Those run an init system and have a whole Linux stack of their own running inside.
Docker/OCI take a different approach: we don’t really care about the whole operating system, we just want apps to run in a predictable environment. So while the container does contain a good chunk of a regular Linux installation, it’s there so that the application has all the libraries it expects, usually for network software that runs on a specified port. Basically, “works on my machine” becomes “here’s my whole machine with the app on it, already configured”.
And then we were like, well, this is nice, but what if we have multiple things that need to talk to each other to form a bigger application/system? And that’s where docker-compose and Kubernetes pods come in. They describe a set of containers that form a system as a single unit and link them up together. In the case of Kubernetes, it’ll even potentially run many, many copies of your pod across multiple servers.
The last one is usually how dev environments go: one of them has all your JS tooling (npm, pnpm, yarn, bun, deno, or even all of them). That’s all it does, so you can’t possibly have a Python library that conflicts or whatever. And you can’t accidentally depend on tools you happen to have installed on your machine, because then the container won’t have them and it won’t work; you’re forced to add them to the container. Then that’s used to build and run your code, and now you need a database. You add a MongoDB container to your compose, and now your app and your database are managed together and you can even access the other containers by their name! Now you need a web server to run it in a browser? Add NGINX.
All isolated, so you can’t be in a situation where one project needs node 16 and an old version of mongo, but another one needs 20 and a newer version of mongo. You don’t care: each project has a mongo container with the exact version required, no messing around.
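As a rough sketch of what that looks like with plain docker commands (the image tags, container names and environment variable are just illustrative assumptions; compose expresses the same thing declaratively in one file):

```bash
# One isolated network per project; containers on it can reach each other by name.
docker network create myproject

# Pinned database version, only reachable from inside the project network.
docker run -d --name mongo --network myproject mongo:4.4

# Pinned Node version for the app; it reaches the database at the hostname "mongo".
# MONGO_URL is an app-specific variable, just for illustration.
docker run -d --name app --network myproject \
  -e MONGO_URL=mongodb://mongo:27017/myproject \
  -v "$PWD":/app -w /app node:16 npm start

# Reverse proxy in front (it would need a config mounted to actually proxy to the
# app); the only thing with a published port on the host.
docker run -d --name web --network myproject -p 8080:80 nginx:alpine
```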
Typically you don’t want to use Docker as a VPS though. You certainly can, but the overlay filesystems will become inefficient and it will drift very far from the base image. LXC and nspawn are better tools for that and don’t use image stacking or anything like that. Just a good ol’ folder.
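For instance, a minimal systemd-nspawn sketch, assuming debootstrap is installed and using placeholder paths:

```bash
# Build a plain Debian tree into an ordinary directory...
sudo debootstrap stable /var/lib/machines/mymachine
# ...then boot it like a lightweight VM, its own init system and all.
sudo systemd-nspawn -D /var/lib/machines/mymachine -b
```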
That’s just some applications of namespaces. Process, network, time, users/groups, and filesystems/mounts can all be managed independently, so many containers can share the same network namespace while sitting in different mount namespaces.
And that’s how Docker, LXC, nspawn, Flatpak, and Snaps are kinda all mostly the same thing under the hood, and why it’s a very blurry line between what you consider to be isolation layers, bundled dependencies, containers, or virtual machines. There’s an infinite number of ways you can set up the namespaces, ranging from just seeing /tmp as your own personal /tmp all the way to basically a whole VM.
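If you want to poke at those building blocks directly, util-linux’s unshare command creates namespaces without any container runtime involved; a quick sketch:

```bash
# New PID + mount namespaces with a fresh /proc:
# inside, the shell sees itself as PID 1 and only its own processes.
sudo unshare --pid --fork --mount-proc bash

# A private mount namespace: mounting a tmpfs over /tmp in there
# gives that shell its own personal /tmp without touching the host's.
sudo unshare --mount bash -c 'mount -t tmpfs tmpfs /tmp && touch /tmp/only-i-see-this && ls /tmp'
```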
It indeed doesn’t; its purpose is to show the differences and clarify why/where OP might have heard you need special care for portable installs on USB sticks.
All the guides and tutorials out there are overwhelmingly written with regular USB sticks in mind and not M.2 enclosures over USB. So they’ll tell you to put as much stuff on tmpfs as possible and avoid all unnecessary reads and writes.
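The typical advice from those guides boils down to something like this (a sketch; the sizes and which directories you move to RAM are arbitrary choices):

```bash
# RAM-backed /tmp and no access-time updates, to cut down on flash writes.
sudo mount -t tmpfs -o noatime,size=512M tmpfs /tmp
sudo mount -o remount,noatime /
# (The equivalent fstab entries make this permanent across reboots.)
```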
We have to define what installing software even means. If you install a Flatpak, it basically does the same thing as Docker but somewhat differently. Snaps are similar.
“Installing” software generally means any way that gets the software on your computer semi-permanently and run it. You still end up with its files unpacked somewhere, the main difference with Docker is it ships with the whole runtime environment in the form of a copy of a distro’s userspace.
But fair enough, sometimes you do want to run things directly. Just pointing out it’s not a bad answer, just not the one you wanted because of intent that was missing from your OP. Some things are so finicky and annoying to get running on the “wrong” distro that Docker is the only sensible way to install them. I run the Unifi controller in a container, for example, because I just don’t want to deal with Java versions and MongoDB versions. It comes with everything it needs, and I don’t have to needlessly keep Java 8 around on my main system, potentially breaking things that need a newer version.
Not really different from any other M.2 SSD; that it’s over USB doesn’t matter.
The only consideration for USB sticks is that they’re usually quite crap, so running a system off it tends to use up the flash pretty quickly.
How is it unrelated? Running MongoDB in a container so that it just works and you have a portable/reproducible dev environment is a perfectly valid approach.
A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem.
Edit: so people stop asking, here’s their docs on DNA relatives: https://customercare.23andme.com/hc/en-us/articles/212170838
“Showing your genetic ancestry results makes select information available to your matches in DNA Relatives”
It clearly says select information, which one could reasonably assume is protective of your privacy. All the reports seem to imply the hackers got access to much more than just the couple of fun numbers the UI shows you.
At minimum I hold them responsible for not thinking this feature through enough to realize it could be used for racial profiling. That’s the equivalent of being searchable on Facebook, except they didn’t think to keep your email, location, and phone number from being available to everyone who searches for you. I want to be discoverable by my friends and family, but I’m not intending to make more than my name and picture available.
Technically that’s compositor level stuff, and it probably can even treat it like an actual diagonal display and prevent windows from going there and everything.
This is a good example of why some of the protocols are taking so long. Once finalized, it’ll probably somehow also be capable of handling… that.
With an accelerometer and a compositor written for it, it could probably even keep everything level in real time: tilt the monitor and the windows rotate to match automatically.
The carriers love to brag about high capacity and fast speeds, but they’re still unwilling to deliver the bandwidth. They’re all advertising “unlimited” data, but if you scroll TikTok for a while they’ll block your line for “excessive” data usage or throttle you down to 256 kb/s.
The irony with this is that if incognito were really untraceable, the government would be pushing to make it less secure, just like they’re already actively trying to force backdoors into Signal and other actually private services because “think of the children”.
They mostly don’t exist yet apart from this PR.
On Vista and up, there’s only the Display Only Driver (DOD), which gets resolutions and auto-resizing to work, but it has no graphical acceleration in itself.
It’ll definitely run Kali well. Windows will be left without 2D/3D hardware acceleration, so it’ll be a little laggy, but it’s usable.
VMware has its own driver that translates enough DirectX for Windows to run more smoothly and not fall back to the basic VGA path.
But VMware being proprietary software, changing distro won’t make it better, so either you deal with the VMware bugs or you deal with a stable but slow, software-rendered Windows.
That said, on the QEMU side it’s possible to attach one of your host’s GPUs to the VM, where it will get full 3D acceleration. Many people are straight up gaming in competitive online games in a VM with QEMU. If you have more than one GPU, even if it’s an integrated GPU plus a dedicated one, as is common with most Intel consumer non-F CPUs, you can make that happen and it’s really nice. Well worth buying a used GTX 1050 or RX 540 if your workflow depends on a Windows VM running smoothly. Be sure your CPU and motherboard support it properly before investing though; it can be finicky, but it’s so awesome when it works.
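Before spending money, it’s worth checking that your IOMMU groups are usable; a common sketch for listing them (it assumes IOMMU is already enabled in the firmware and on the kernel command line):

```bash
#!/bin/bash
# List every IOMMU group and the PCI devices in it.
# The GPU you want to pass through should sit in its own group,
# or share it only with devices you can also hand to the VM.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```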
I think in this context it’s meant on a technical level: as far as the fediverse is concerned, there’s not a whole lot instances can do. Anyone can just spin up an instance and bypass blocks unless it works on an allowlist basis, which is kind of incompatible with the fediverse if we really want to achieve a reasonable amount of decentralization.
I agree that we shouldn’t pretend it’s safe for minorities: it’s not. If you’re a minority joining Mastodon or Lemmy or Mbin, you need to be aware that blocking people and instances has limitations. You can’t make your profile entirely private like one would do on Twitter or any of Meta’s products. It’s all public.
You can hide the bad people from the users, but you can’t really hide the users from the bad people. You can’t even stop people from replying to you from another instance. You can refuse to accept the message on the user’s instance, but the other instance can still add comments that don’t federate out. Which is kind of worse, because it can lead to side discussions you have no way of seeing or participating in to defend yourself, and they can be saying a lot of awful things.
“Well, I’m currently using VMware on Ubuntu”
Well there’s your mistake: using VMware on a Linux host.
QEMU/KVM is where it’s at on Linux, mostly because it’s built into the kernel a bit like Hyper-V is built into Windows. So it integrates much better with the Linux host which leads to fewer problems.
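On Ubuntu specifically, getting a working setup is roughly this (the package and group names are the usual Ubuntu ones, so double-check against your release):

```bash
# The CPU needs to expose virtualization extensions (expect a non-zero count).
grep -Ec '(vmx|svm)' /proc/cpuinfo

# QEMU/KVM, libvirt and the virt-manager GUI.
sudo apt install qemu-kvm libvirt-daemon-system virt-manager

# Manage VMs without sudo (log out and back in afterwards).
sudo adduser "$USER" libvirt
```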
“Ubuntu imho is unstable in and of itself because of the frequent updates so I’m looking for another distro that prioritizes stability.”
Maybe, but it’s still Linux. There’s always an escape hatch if the Ubuntu packages don’t cut it. And I manage thousands of Ubuntu servers, some of which are very large hypervisors running hundreds of VMs each, and they work just fine.
I dislike frameworks for pretty similar reasons, but being a professional developer I can also see their value when you have to work as a team.
For example, with Ruby on Rails, you can jump from codebase to codebase and find your way reasonably quickly. Same with Laravel: I can open up a Laravel project, and immediately know what should be located roughly where.
From a DevOps perspective, they’re also useful in that I can reuse the same template over and over to deploy these kinds of apps, because they use the same services in the same way.
I’d still rather not use one, but then other people grow the project organically in all sorts of directions, and the juniors especially get really lost and confused. I like to treat frameworks as semi-permanent training wheels. They’re not for me, they’re for the rest of my team.
All of those essentially just call the Cloudflare API, and they’ll all work reasonably well. The linked Docker image, for example, does the bulk of it in this bash script, which they call from a cron job plus some container init logic that I imagine does the initial update when the container starts.
Pick whatever is easiest and makes the most sense for you. Even the archived Docker thing is so simple I wouldn’t worry about it being unmaintained, because it can reasonably be called a finished product. It’ll work until Cloudflare upgrades their API and shuts down the old one, and you’d get months to years of warning about that because of enterprise customers.
Personally, that’s a trivial enough task that I’d probably just custom-write a Python script to call their API. They even have a Python library for their API. Probably like 50-100 lines long, tops. I have my own DNS server, and my DDNS “server” is a 25-line PHP script; the client is a curl command in a cronjob.
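For Cloudflare specifically, the entire client can be roughly this (the zone/record IDs, token, hostname, and the IP-echo service are all assumptions you’d fill in yourself):

```bash
#!/bin/bash
# Hypothetical Cloudflare DDNS client: look up the current public IP and
# overwrite one A record with it. Run it from cron every few minutes.
ZONE_ID="your-zone-id"        # assumption: from the Cloudflare dashboard
RECORD_ID="your-record-id"    # assumption: the ID of the A record to update
CF_API_TOKEN="your-api-token" # assumption: a token with DNS edit permission

IP=$(curl -fsS https://ifconfig.me)   # any "what's my IP" service works

curl -fsS -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\",\"ttl\":300}"
```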
DDNS is a long solved and done problem. All the development is essentially just adding new providers.
Same for KDE https://apps.kde.org/fr/kjournaldbrowser/