Marxism-Fennekinism

(He/him) Marxist-Leninist and amateur writer. I like cats, foxes, sci-fi, science fantasy, and Pokemon Mystery Dungeon. Message me for my roleplay ideas!

Lemmygrad: https://lemmygrad.ml/u/HiddenLayer5

Discord: LinuxFennekin#5514

Reddit: /u/HiddenLayer5

  • 57 Posts
  • 685 Comments
Joined 4 years ago
Cake day: August 14th, 2020

  • I think the main issue for the companies is that power adapters have a nearly unlimited lifespan compared to lithium batteries, so it would be less profitable for them to sell you a direct-attached power adapter than a bunch of batteries and a charger, where you have to keep crawling back to them when the batteries inevitably give out in three years.

    It would be trivial to design a blank battery attachment with a DC jack, and just have it hooked up to what is essentially a beefed-up laptop charger. There are plenty of applications where a corded tool is perfectly adequate and even superior to a cordless one, so the fact that none of the manufacturers offer it as an option hints that it was a business decision rather than merely an oversight.




  • I wonder how easy it would be to DIY something like that. Like, would it be as simple as picking up an off-the-shelf power supply with the right voltage and current and 3D printing an attachment that fits into the battery slot with a DC jack on the side (or even just gutting a dead battery pack, pulling out the cells and control electronics, soldering a DC jack straight onto the main contacts, and drilling a hole for it to poke through)? Or do modern power tools actually need to authenticate the battery with some kind of tool DRM?
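
    To put some very rough numbers on the "right voltage and current" part (purely illustrative figures, not specs for any real tool or brand): even a mid-size drill can pull a few hundred watts under load, and at pack voltage that works out to far more current than a typical laptop brick delivers, so the supply would need to be sized more like a bench or server PSU. A quick sanity-check sketch:

    ```python
    # Back-of-the-envelope check for replacing a battery pack with a DC supply.
    # All figures are hypothetical examples, not specs for any real tool.

    def required_current(tool_power_w: float, pack_voltage_v: float) -> float:
        """Current the supply must deliver to match the tool's draw (I = P / V)."""
        return tool_power_w / pack_voltage_v

    def supply_is_adequate(supply_voltage_v: float, supply_current_a: float,
                           pack_voltage_v: float, tool_power_w: float) -> bool:
        """Very rough check: voltage close to the pack's nominal, current covers the draw."""
        voltage_ok = abs(supply_voltage_v - pack_voltage_v) <= 1.0
        current_ok = supply_current_a >= required_current(tool_power_w, pack_voltage_v)
        return voltage_ok and current_ok

    # Example: an 18 V pack and a drill drawing ~500 W under load.
    print(round(required_current(500, 18), 1))        # ~27.8 A needed
    print(supply_is_adequate(19.5, 11.8, 18, 500))    # ~230 W laptop brick: False
    print(supply_is_adequate(18.0, 30.0, 18, 500))    # ~540 W bench/server supply: True
    ```

    (Stall current on a brushed DC motor can be several times the loaded draw, so a real adapter would also need a lot of headroom or some form of current limiting.)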







  • Thank you.

    > Say there’s some exploit that allows some component of KDE to be used to read a file. If it’s running under an unprivileged user - it sucks. Everything in the user’s homedir becomes fair game. But if it runs as root - it’s simply game over. Everything on the system is accessible. All config, all bad config, files of all applications (databases come to mind). Everything.

    This is also something I’m thinking about: all the hard drives mounted on the server are accessible to the only regular user, since that is the account my other computers use to access them. I’m the only one with access to the server, so everything is accessible under one user. The data on those drives is what I want to protect, so wouldn’t a vulnerability in either KDE or Firefox be just as dangerous to those files even when running as the regular user? (Rough sketch of what that exposes below.)

    Also, since my PC has those drives mounted through the server and accessible to the regular user I log into my PC as, wouldn’t a vulnerability in a program running as that user on my PC also compromise those files, even if the server only hosted the files and did absolutely nothing else? Going back to the Firefox thing: if I had a sandbox breach on my PC, it would still be able to read the files on the server, right? Wouldn’t that be just as bad as if I had been running Firefox as root on the server itself? It really feels like the only way to 100% keep those files safe is to never access them from an internet-accessible computer, and everything else falls short and is just as bad as the worst-case scenario, though maybe I’m missing something. Am I just being paranoid about the non-root scenarios?

    How does a “professional” NAS setup handle this?
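
    To make the "everything is accessible under one user" point concrete, here’s a minimal sketch (the mount point is made up; substitute wherever the drives are actually mounted) that just counts what a process running as a given user could read, which is roughly the blast radius of an exploit in anything running as that user:

    ```python
    import os

    def readable_files(root: str) -> int:
        """Count files under `root` that the *current* user could read.
        Run it as the regular user and again as root to compare how much
        a compromised process under each account would be able to touch."""
        count = 0
        for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
            for name in filenames:
                if os.access(os.path.join(dirpath, name), os.R_OK):
                    count += 1
        return count

    # Hypothetical mount point for the NAS drives; not a real path from my setup.
    print(readable_files("/mnt/storage"))
    ```

    If the count comes out the same either way, then for those files a compromise of the regular user really is about as bad as a compromise of root, which is presumably why the usual advice is separate, minimally privileged accounts (or shares) per client or service rather than one user that can see everything.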


  • > Please don’t do this! DEs are not tested to be run as root! Millions of lines of code are expected to not have access to anything they shouldn’t have and as such might be built to fail quietly if accessing something they shouldn’t in the first place. Same thing applies to Firefox, really.

    Could you elaborate on this? I’m genuinely surprised, because Fedora just asks during installation whether you want the option to log in as root from KDE, so I always assumed it’s intended to be used that way.


  • I actually moved from a fully CLI server to one with a full desktop when I upgraded from a single-board computer to x86. The thing is, it’s not just a NAS: I regularly use it to offload long operations (moving, copying, or compressing files, mostly) so I don’t need to tie up my PC for those. To do that I just remote into it and type in the command, and then I can turn my PC off or do whatever without affecting the operation. So in a way it’s a second PC that also happens to be a server for my other machines.

    I use screen occasionally, and I used to use it a lot more when the server was CLI-only, but I find it really unwieldy: the way it manages multiple active terminals means you have to type in the ID of each screen to get back into it, and it refuses to scroll even when run in a terminal emulator that supports scrolling; moving the scroll wheel just cycles through recent commands instead.

    Not trying to make excuses, just trying to explain my reasoning. I know it’s bad practice and none of these are things I’d do if I was managing an actual production server, but since it’s only accessible from my LAN I tend to be a lot more lax with it.

    I’m wondering if I could benefit from some kind of virtualized setup that separates the server stuff while still letting me remote into a desktop on the same machine, or if I can get away with just remoting in as a non-root user. I’ve never used a hypervisor and have no idea how to, though, so I’m not sure how well that would go; the well-known open source ones like Xen seem really technical and really feel like something not meant to be used outside an actual data centre.


  • Mainly that. I want to be able to have multiple terminal windows open and have them stay open independently of my main PC. Part of the reason I have a file server instead of plugging all the drives into my PC is so I can offload processor-heavy operations onto it (namely making archives and compressing files for long-term storage) so I don’t have to use my PC for that.

    People have mentioned programs like screen, but IMO it’s way more annoying to juggle multiple terminals with it than if they were just windows, and screen doesn’t scroll, so whatever goes past the top edge is just inaccessible, which I find really annoying. I’ve also been burned by mistyped file operations in the terminal before (mostly deleting stuff I didn’t mean to), and I just find it safer to use a GUI file manager, where it’s a lot harder to subtly mess something up and not notice until it’s too late.
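
    For the "deleting stuff I didn’t mean to" part, one middle ground between raw rm and a full GUI file manager is a small wrapper that only ever moves things into a trash directory, so a mistyped operation can be undone. Purely a sketch; the trash location and naming scheme here are made up:

    ```python
    import os
    import shutil
    import time

    TRASH = os.path.expanduser("~/.local/share/mytrash")  # hypothetical location

    def trash(path: str) -> str:
        """Move `path` into a trash directory instead of deleting it outright."""
        os.makedirs(TRASH, exist_ok=True)
        dest = os.path.join(TRASH, f"{int(time.time())}-{os.path.basename(path)}")
        shutil.move(path, dest)
        return dest

    # Hypothetical usage:
    # trash("/tmp/some-file")
    ```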



  • > I hope this is done over VPN and that you have 2FA configured on the VPN endpoint? Please don’t tell me it’s just portforward directly to a VNC running on the servers or something similar because then you have bigger problems than just random ‘oops’.

    I have never accessed any of my servers from the internet and haven’t even adjusted my router firewall settings to allow this. I kept wanting to but never got around to it.

    > Since these are home systems the potential monetary damage from downtime and re-install isn’t huge, so personally I’d just take the hit and wipe/reinstall. I’d learn from my mistakes and build it all up again with better routines and hygiene. But that’s what I’d do.

    Yeah this and other comments have convinced me to reinstall and start from scratch. Will be super annoying to set everything back up but I am indeed paranoid.