• 2 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Depends on whom you ask. For me, selfhosting is all about the software, and renting hardware is a perfectly fine solution for that. You don’t need to worry about UPSes, maintaining hardware and all the jazz that comes with your own gear. Sure, then you’re depending on your VPS provider to actually keep services up, but even a small VPS provider has more people working on things than just yourself. And they have power solutions, like industrial-scale power solutions with generators, multiple connection points to the internet and things like that, which are either impossible or very expensive to set up just for your own hardware.

    And then there’s the other side, like home automation, where relying on internet connectivity to get your lights on is, in my opinion, a bit silly, and running a server for that locally makes perfect sense. So, the right solution depends on your needs, but if you want to define what counts as self-hosting, in my opinion it boils down to who has the root/administrator credentials on your server. Others may have different opinions.


  • While I agree with @rglullis@communick.news that this isn’t, strictly speaking, on-topic for this community, that kind of knee-jerk response is very much off-topic as well. The first community rule is to be civil, and in general I, perhaps optimistically, would like conversation across the fediverse as a whole to be civil, or at least well argued, a bit like it used to be (more or less, YMMV) back in the Usenet days.

    And on the topic of self-hosting, that’s a line drawn in water. I run various things myself (postfix+dovecot, LAMP, bitwarden, seafile, nextcloud…) on rented servers running linux+kvm. And I get paid for doing that, it’s very much a business case, so I’m a bit reluctant to ask questions here about the setup I have, as I don’t think it would be fair to ask hobbyists for advice on a project where money is directly involved. But for me personally that setup checks both boxes: I get money from doing it, and at the same time I personally can stay out of walled gardens like M365 or Gsuite.

    TL;DR: There’s no need to be rude, you can choose to politely point people in the right direction.






  • And if you’re concerned about data written to sectors that have since been reallocated, you should physically destroy the whole drive anyway. With SSDs this is even more complicated, but I like to keep it pretty simple: if the data that has been stored on the drive at any point of its life is under any kind of NDA or other highly valuable contract, it gets physically destroyed. If the drive spent its life storing my family photos, a single run of zeroes with dd is enough.

    In the end the question is whether the drive, at any point, held data worth anything even remotely near the cost of a new drive. If it did, it’s hammer time; if it didn’t, most likely just wiping the partition table is enough. I’ve given away old drives after just ‘dd if=/dev/zero of=/dev/sdx bs=100M count=1’. On any system that appears as a blank drive, and while it’s still possible to recover the files from it, that’s good enough for donated drives. Everything else is either drilled through multiple times or otherwise physically destroyed.
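
    A quick sketch of that donate-away wipe plus a sanity check (sdX is a placeholder for the target drive, so double-check the device name before running anything):

    ```bash
    # Overwrite the first 100 MB, which takes out the partition table and filesystem headers
    sudo dd if=/dev/zero of=/dev/sdX bs=100M count=1

    # Re-read the partition table and confirm the drive now shows up as blank
    sudo partprobe /dev/sdX
    lsblk /dev/sdX
    ```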


  • IsoKiero@sopuli.xyz to Selfhosted@lemmy.world · Proper HDD clear process?

    Dd. It writes to the disk at block level and doesn’t care whether there’s any kind of filesystem or RAID configuration in place; it just writes zeroes (or whatever you ask it to write) to the drive and that’s it. Depending on how tight your tin foil hat is, you might want to do a couple of runs from /dev/zero and /dev/urandom before handing the drives over, but in general a single full run from /dev/zero makes it pretty much impossible for any Joe Average to get anything out of the device.

    And if you’re concerned that some three-letter agency is interested in your data, you can use DBAN, which does pretty much the same thing as dd but automates the process and (afaik) does some extra magic to erase the data more thoroughly. But if you’re worried enough about that scenario, I’d suggest using an arc furnace and literally melting the drives into an exciting new alloy.
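
    For reference, the dd runs described above look roughly like this (again, sdX is a placeholder; status=progress just prints progress and can be dropped on older coreutils):

    ```bash
    # Single full pass of zeroes - enough against any casual recovery attempt
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress

    # Optional extra pass of random data for the tighter tin foil hat scenario
    sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress
    ```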



  • According to my Spotify Wrapped I listened to about 2500 different artists. The yearly subscription is 143,88€, so if Spotify took 30% and the rest were split equally between every artist, they’d each get about 0,040€ from me. For your $26 that’d mean, on similar math, that you’d need roughly 650 listeners like me, so it’s at least in the ballpark if you have 1000 streams on there.
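
    The same back-of-the-envelope math as a quick sketch (the figures are just the rough numbers from above, nothing more precise than that):

    ```bash
    awk 'BEGIN {
      fee = 143.88; cut = 0.30; artists = 2500             # yearly fee, assumed 30% cut, artists listened to
      per_artist = fee * (1 - cut) / artists
      printf "per artist: %.4f EUR/year\n", per_artist      # ~0.040 EUR
      printf "listeners needed for ~26 USD: %d\n", 26 / per_artist   # ~645, ignoring currency conversion
    }'
    ```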

    I obviously omitted things like VAT and other taxes, payment processor fees, and the complexity of revenue streams in general, like how long I listened to each artist, to keep it simple.

    I’m not saying whether that’s fair or not; I just did quick and rough math with the data I had easily available. All I know is that for those few cents per artist I’m not providing anything to anyone, yet I receive quite a lot every day.

    For more detailed info you can check Spotify’s own report.



  • You can’t configure a DNS server by name on anything (it has to be an IP address), so you’d need some kind of script/automation to query the current IP address of your pihole from google/your ddns provider/someone and update it on your parents’ router, which can be a bit tricky or straight-up impossible depending on the hardware.

    A VPN would solve both 1 and 2 from your list, as your pihole would be available at a static address in both locations. You can’t authenticate on a DNS server by MAC, as you don’t receive the originating MAC at all. Another solution would be to get a static IP address from some provider and tunnel traffic so that your pihole could be reached through that static address.
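
    A minimal sketch of that VPN approach, assuming a WireGuard tunnel between the two sites where the pihole end sits at the illustrative tunnel address 10.8.0.1 (the address and interface name are placeholders, not anything from the original setup):

    ```bash
    # Bring the tunnel up on the remote site
    sudo wg-quick up wg0

    # Check that the pihole answers DNS queries over the tunnel address
    dig @10.8.0.1 example.com +short

    # If that works, point the remote router's DHCP DNS option at 10.8.0.1
    # so every client on that network resolves through the pihole
    ```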


  • My sons are in that age bracket, and when they requested laptops of their own (their older sister got one for school), I “borrowed” decommissioned ThinkPads from work, threw empty SSDs in them, and gave the computers to the boys with a Linux Mint installer on a USB stick. The younger one got it running in a couple of hours without any help and is actively learning how to use the thing; yesterday he told me how he had learned to open software using keyboard shortcuts, and in general he’s interested in the tinkering aspect of things. The older one has a more pragmatic approach: he got the installation done as well, but he’s not interested in the computer itself, as it’s just a tool to listen to music, look up tutorials for his other interests, and things like that.

    Both cases are of course equally valid and I’m just happy that they are willing to learn things beyond just pushing the buttons. But I’m also (secretly) happy that my youngest shares my interests; he’s been making simple games with Scratch and in general shows interest in how computers, networking and other stuff actually work.


  • Ubuntu’s is opt-out, not opt-in

    I haven’t installed Ubuntu in a while, but in the EU you need prior consent from the user to gather any kind of data, and if I remember correctly I haven’t seen such a thing. And it’s not enough to bury it in the documentation and say ‘if you use our software you allow us to blah blah’; you must get consent via an action from the user which specifically allows it, so if telemetry comes silently with ‘apt dist-upgrade’ that’s not enough.



  • Broken computers aren’t really stressful to me anymore, but it surely plays a part that I had kinda-sorta been waiting for a reason to wipe the whole thing anyway, and as I could still access all the files on the system, in the end it was a somewhat convenient excuse to take the time to switch distributions. Apparently I didn’t have a backup of ~/.ssh/config even though I thought I did, but those dozen lines of configuration aren’t a big deal.

    Thanks anyway, a good reminder that with Linux there are always options to work around the problem.



  • Would I be correct to assume that you’ve been hurt by Btrfs in its infancy and choose to not rely on it since?

    I have absolutely zero experience with btrfs. Mint doesn’t offer it by default and I’m just starting to learn the bits and bobs of zfs (and I like it so far), so I chose it with the idea that I can learn it in a real-world situation. I already have a zfs pool on my proxmox host, though for that I wish I’d gone with something else, as it’s pretty hungry for memory and my server doesn’t have a ton to spare. But reinstalling that with something else is a whole other can of worms, as I’d need to dump a couple of terabytes of data somewhere else in order to make a clean install. I suppose it might be an option to move the data around on the disks and convert the whole stack to LVM one drive at a time, but that’s something for the future.
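
    On the memory hunger specifically, the ZFS ARC size can be capped; a minimal sketch, with 4 GiB as a purely illustrative value:

    ```bash
    # Cap the ZFS ARC at 4 GiB on the running system (takes effect immediately)
    echo $((4 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max

    # Make the limit persistent across reboots
    echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u   # needed on Debian/Ubuntu-based hosts such as Proxmox
    ```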

    But I imagine you couldn’t care less 😜.

    I was a Debian-only user for a long time, but when woody/sarge (back in 2005-2006) had pretty old binaries compared to upstream and Ubuntu started to gain popularity, I switched over. Especially the PPA support was really nice back then (and has been pretty good for several years), so it was really good for a desktop in particular, and if I’m not mistaken you could even switch from Debian to Ubuntu just by editing sources.list and running dist-upgrade with some manual fixes.

    So, coming from the mindset that everything just works and switching from one release to another is just a slightly longer and more complex update, the current trend rubs me very much the wrong way.

    So, basically the tl;dr is that life is much more complex today than it was back in the day when I could just tinker with things for hours without any responsibilities (and there’s a ton more to tinker with; my home automation setup really needs some TLC to optimize electricity consumption), so I just want an OS which gets out of my way and lets me do whatever I need whenever I need it. An immutable distro might be an answer, but currently I don’t have the spare hours to actually learn how they work. I just want my SysVinit back, with distributions that can go on for a decade without any major hiccups.


  • Great piece of information. I personally don’t see the benefits of an immutable distribution, or at least it feels (without any experience) like I’d spend more time setting it up and tinkering with it than actually recovering from the rare cases where things just break. Or at least that’s the way it used to be for a very long time, and even when something did break, fixing it was usually about as fast as reverting a snapshot would have been. Sure, you need to be able to work on a bare console and browse through log files, but I’m old enough that that was the only option back in the day if you wanted to get X running.

    However, the case today was something that I just couldn’t easily fix, as the boot partition simply didn’t have enough space (since when is 700MB not enough…), so even a rollback wouldn’t have helped to actually fix the installation. Potentially I might have had the option to move the LVM partition on the disk to grow the boot partition, but that would’ve required shrinking the filesystem first (which isn’t trivial on an LVM PV), and given the experience Ubuntu has provided lately, I just took the longer route and installed Mint with zfs. It should be pretty stable, as there are no snap packages updating at random intervals and it’s a familiar environment for me (dpkg > rpm).
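
    For reference, the shrink-and-move route would have looked roughly like the outline below (ext4 on LVM assumed; device names and sizes are placeholders, and moving the start of the partition afterwards is the genuinely risky step, so this is an outline rather than a recipe):

    ```bash
    # All from a live/rescue environment, with backups taken first
    e2fsck -f /dev/vg0/root                           # check the filesystem before touching it
    resize2fs /dev/vg0/root 40G                       # shrink the filesystem below the target LV size
    lvreduce -L 45G /dev/vg0/root                     # shrink the LV, leaving headroom over the filesystem
    pvresize --setphysicalvolumesize 100G /dev/sda3   # shrink the PV inside the partition
    # Only after that can the partition itself be shrunk and moved (e.g. with parted)
    # to free up space for a bigger /boot, which is the step that makes this non-trivial
    ```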

    Even if immutable distros might not be for my use case, your comment has spawned a good thread of discussion and that’s absolutely a good thing.