What about those of us who never want to see any ads in our lives? Can these companies just force-feed them to us, and we kind of just accept that?
For example, Hetzner’s servers are cheap and have served me well for many years. The big clouds are for companies with enough funding. If you need personal servers, the VPS providers give good value for money.
I’ve been digging into this printer’s settings and, sadly, the only way it can send a scan is as a fax… It’s the entry model and has served us very nicely for years. It even connects to the internet, but it lacks features such as email, SMB, or FTP. To me this looks like something an open source firmware could fix. It has enough processing power to possibly run a lightweight Linux distribution, so installing one that enables modern communication protocols doesn’t seem impossible.
That was it for me: I installed paperless-ngx, set it up to scan my email folders, copied all the random PDFs from my “organized” tax folder, and scanned the rest.
Too bad I just happen to have that Brother printer/scanner without SMB or FTP support. So I need to go through the process of scanning on my computer first, then uploading.
Of course. My setup now is a Proxmox server + a NAS. What I’m planning to do is to install a service for this on Proxmox, then have the files synced over NFS to the NAS, which backs them up every night to Backblaze. And of course I need to keep the paper copies too, but being able to search, tag, and archive the documents is great when you need to remember a thing X that was mentioned in a paper you got back in 2014.
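The nightly Backblaze step can be as simple as a cron job driving rclone. A sketch, assuming an rclone remote named `b2` is already configured for the Backblaze bucket and the NAS exposes the share at `/mnt/nas/paperless` (both names are made up):

```
# /etc/crontab sketch: every night at 03:00, push the document share
# to a Backblaze B2 bucket through a preconfigured rclone remote
0 3 * * *  root  rclone sync /mnt/nas/paperless b2:document-backup
```

`rclone sync` makes the bucket mirror the share, so anything deleted locally disappears from the backup too; if you want to keep deleted files around, `rclone copy` is the more cautious choice.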
It just doesn’t feel right to run multiple Postgres databases when every other service uses the shared one on the network. Monitoring, disk space, and backups are already set up for it…
Installed it on my homelab today because of this thread. I never really managed my phone images in any way, never uploaded them anywhere. This was the first time. About 5 gigabytes of images and videos were synced to my NAS in a few minutes, and now I can search them and all that. It’s a pretty cool setup, although the installation is a bit tricky if you don’t follow the path they give you. I run a Postgres server in Proxmox, and you have to install exactly the right version of pgvecto.rs for the system to work.
Browsing the issues I was able to figure out what went wrong, and after downgrading, no issues.
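If you hit the same version mismatch, the installed extension version is easy to check from psql. A sketch, assuming pgvecto.rs is installed under its default extension name `vectors`:

```sql
-- Show the pgvecto.rs extension version the database actually loaded,
-- to compare against the version your application release expects
SELECT extname, extversion FROM pg_extension WHERE extname = 'vectors';
```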
Cloudflare R2 is the cheapest here, it’s free for some gigabytes and egress is free too.
To be honest, I’d just disable image uploads…
As said in the thread, you need some kind of tunnel that stays up and doesn’t need to be fixed if the internet goes down.
WireGuard, or, if you want a super easy setup, Tailscale’s take on WireGuard, is great for this. Now you have a private IP address in your VPN network for your home server, which stays up and answers HTTP. The next thing you need is a cheap VPS somewhere with a public IP address. Once that is running and joined to the WireGuard network, so you can reach your home server from the VPS, you need an Nginx proxy on the public server. Either set it up by hand, or use a tool such as Nginx Proxy Manager to handle the proxy setup.
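For the plain WireGuard route, the VPS side boils down to one small config file. A sketch with made-up addresses and keys (10.0.0.1 for the VPS, 10.0.0.2 for the home server):

```ini
# /etc/wireguard/wg0.conf on the VPS (addresses and keys are placeholders)
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home server; only its VPN address is routed through this peer
PublicKey  = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with `wg-quick up wg0`; the home server gets a mirror-image config with the VPS as its peer.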
It basically works like this: you point a domain name (an A or CNAME record) at the public VPS, then with Nginx you configure that anything coming in to domain X gets proxied to VPN IP address Y and port Z. Now you can add HTTPS to this domain and get a Let’s Encrypt certificate for it. Again, you can do this manually with Nginx, or let Nginx Proxy Manager handle it for you.
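Done by hand, the Nginx side is a single server block. A sketch, assuming example.com as the domain, 10.0.0.2 as the home server’s VPN address, and 8080 as the service port:

```nginx
# Terminate TLS on the VPS and proxy to the home server over the VPN
server {
    listen 443 ssl;
    server_name example.com;

    # Certificate obtained from Let's Encrypt (e.g. with certbot)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```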
Finally: stay safe. If you really open services to the public internet from your home, be very sure to have all the latest updates installed and use strong passwords on all of them. Alternatively, you can use the home services directly from the WireGuard/Tailscale network by accessing them through their private IP addresses; your computer just needs to be in the same network.
I’m running it in my homelab for projects I do not (yet) push anywhere public, and projects containing private items such as ssh keys. It is snappy and has a ton of features. I can imagine when the federation support works, one can set up their own git forge and contribute more easily to other forges no matter what software they run.
And, to be honest, that is already how git works if you use the email workflow. Here we just get a web-based flow with federated issues and pull requests. But if email is enough for you, you already have full federation with email and git.
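The email workflow in its smallest form, sketched below; the repository and the addresses are just placeholders:

```shell
# Create a throwaway repo with one commit to demonstrate the flow
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.name=me -c user.email=me@example.org \
    commit -q --allow-empty -m "demo change"

# Turn the latest commit into a mailable patch file
git format-patch -1 HEAD

# Mail it to the project list (not run here):
#   git send-email --to=patches@example.org 0001-demo-change.patch
# The maintainer applies it on their side with:
#   git am 0001-demo-change.patch
```

No forge, no account, no federation protocol: the patch travels as plain email and lands as a normal commit.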
I borrowed an installation CD from the local library around 1998. It was Red Hat 5.x, and I started messing around with it because I was interested in alternative operating systems. Before that, I had OS/2 Warp 3.0 on our family’s IBM Pentium 100 MHz computer, which didn’t really do it for me, to be honest.
It took weeks to get anything working with Linux. I went to the library and borrowed books. Our middle school had an internet connection, so I used it to learn how to configure modelines correctly to get X11 running.
When it did finally run, the default window manager was FVWM95, almost like Windows 95!
I used OS X for a few years in the PowerPC era, only to switch back to Linux around 2008.
Edit: my real love for Linux started when I got Debian running. Red Hat didn’t have anything comparable to apt in those days. You had to download RPM packages manually along with all their dependencies, while apt just worked with one command.
Or AMD 6000 series if power draw and quietness are important. Add Proxmox with ZFS to run all your apps in containers or VMs.
It is just me wanting to filter 🍎 completely from the instance, so all mentions of 🍎 products get redacted. It’s kind of an inside joke, that company being so prevalent on internet forums such as HN or Reddit. At least on my own instance, all mentions of removed are hidden.
Divide and conquer…
I use podman on NixOS. It’s cool, but be warned there are subtle and less subtle differences.
Docker Desktop does. It is very tricky to install Docker without it on a Mac.
You can try installing it in GitHub Actions for your CI runs on the macOS runner. It can be done, but it takes forever, is hacky, and breaks very often.
Yep. I switched from xorg/i3 years ago, and it was already super snappy back then compared to the previous setup. Today everything works with Wayland, and I don’t really need to think about it anymore.
But, YMMV. I avoid Nvidia’s products, which helps a lot with stability.
That’s why you write your protocol as a sync library, then implement the async I/O separately, mapping the data through the protocol modules.
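A minimal sketch of that sans-IO style in Python (the class and method names are made up, not any particular library): the protocol core is a synchronous state machine that only ever sees bytes, and the async transport is a thin wrapper around it.

```python
import asyncio


class LineProtocol:
    """Sync core: splits a byte stream into lines, with no IO anywhere."""

    def __init__(self) -> None:
        self._buf = b""

    def feed(self, data: bytes) -> list[bytes]:
        """Buffer incoming bytes and return any complete lines."""
        self._buf += data
        # everything before the last newline is complete; the tail stays buffered
        *lines, self._buf = self._buf.split(b"\n")
        return lines


async def read_lines(reader: asyncio.StreamReader):
    """Async transport: only moves bytes between the socket and the core."""
    proto = LineProtocol()  # the same core could sit behind blocking IO too
    while data := await reader.read(4096):
        for line in proto.feed(data):
            yield line
```

Because `LineProtocol` never touches a socket, it can be unit-tested with plain byte strings and reused under threads, asyncio, or any other IO model, which is the whole point of keeping the protocol sync.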