Just a stranger trying things.

  • 3 Posts
  • 98 Comments
Joined 1 year ago
Cake day: July 16th, 2023


  • I understand your position. There is a learning curve to containers, but I can assure you that getting the basics down will open up a whole new world of possibilities and also make everything much easier for you. The vast majority of people run their services in containers. They make services less brittle, because each one gets its own tailored environment instead of depending on the host's libraries and packages. They also bring increased security, because a service can't easily escape its boundaries, so its potential vulnerabilities are less of an issue than when running the same service on bare metal.

    I started on Synology too. There is a website called Marius Hosting which focuses on container tutorials for Synology, but over the last few years his instructions have shifted to spinning up containers manually rather than through the UI, which makes it more intimidating than it needs to be for beginners. I'll link it here just as a reference, and I'll check whether the Wayback Machine has him showing the easier way and report back if I find something.

    Edit: yes, here is an original tutorial for Jellyfin (this method still works for me and is still how I use Docker these days): https://web.archive.org/web/20210305002024/https://mariushosting.com/how-to-install-jellyfin-on-your-synology-nas/
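
    For the curious, the UI steps in that tutorial boil down to one container launch. Here is a minimal sketch of the same thing with the Python Docker SDK (docker-py); the image tag, host paths and port are my assumptions, to be adapted to your own NAS:

    ```python
    import docker  # pip install docker

    # Connect to the local Docker daemon (run this on the NAS itself).
    client = docker.from_env()

    # Hypothetical host paths; point these at your own shared folders.
    client.containers.run(
        "jellyfin/jellyfin:latest",
        name="jellyfin",
        detach=True,
        restart_policy={"Name": "unless-stopped"},
        ports={"8096/tcp": 8096},  # web UI at http://<nas-ip>:8096
        volumes={
            "/volume1/docker/jellyfin/config": {"bind": "/config", "mode": "rw"},
            "/volume1/media": {"bind": "/media", "mode": "ro"},
        },
    )
    ```

    Whether you click it together in the Synology UI or script it like this, the fields are the same: image, name, ports and volume mappings.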


  • To answer your question more specifically, most people set up the Pi with Docker, running services that expose a front end in the browser. They basically navigate in their browser to the front end of the service they want and administer it from there: for instance Portainer to manage their Docker containers, Pi-hole for network-wide DNS-level ad blocking, or Jellyfin for their media, which is both the website where they consume the media and an administrator dashboard (see the sketch after the edit below).

    Edit: this works in conjunction with something like Tailscale, which basically lets you reach these same services when you're away from home.
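
    As a sketch of that pattern (the names and ports here are my assumptions, not a prescription), this is roughly all it takes to put a browser front end in front of Docker itself with Portainer, again using the Python Docker SDK:

    ```python
    import docker  # pip install docker

    client = docker.from_env()

    # Portainer needs the Docker socket so it can manage the other containers.
    client.containers.run(
        "portainer/portainer-ce:latest",
        name="portainer",
        detach=True,
        restart_policy={"Name": "always"},
        ports={"9443/tcp": 9443},  # then browse to https://<pi-ip>:9443
        volumes={
            "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
            "portainer_data": {"bind": "/data", "mode": "rw"},
        },
    )
    ```

    From there, every other service on the Pi can be started, stopped and updated from the browser instead of the command line, whether you're on the LAN or coming in over Tailscale.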




    1. There is no GrapheneOS account.
    2. GrapheneOS has some built-in apps: an SMS app, a gallery viewer, a camera, a PDF reader, a calculator, contacts, files, a phone app, and a web browser (Vanadium, based on Chromium). GrapheneOS offers no cloud; you are responsible for picking a service of your choice to manage and back up your data. Backup management is currently in transition, but in the meantime you can use a selfhosted service like Nextcloud.
    3. GrapheneOS does come preinstalled with its own app store, but it is reserved for GrapheneOS's own apps and for distributing certain Google services, which can be optionally installed in their sandbox. Besides that, you can indeed install the Aurora Store to get the free apps from the Google Play Store, or actually use the Google Play Store itself. They can all be installed and used simultaneously. Just be mindful that if you install an app from one store, you don't update it through another, as the two builds can behave differently (e.g. an app installed from F-Droid might use a different notification system than the one from the Google Play Store). You do not need to use Nextcloud if you don't want to: GrapheneOS has no dependency on any additional app. It is a standalone OS; once you install it, you use it however you want.

    Edit: one key advantage of GrapheneOS is support for multiple users. You can (and I recommend it) separate apps into different user profiles. For instance, you can dedicate one profile to apps that require Google services; let's call it Gapps. GrapheneOS then lets you pipe notifications between profiles, so while you are in your main profile you still get notifications from apps running in the background in Gapps. Very convenient.







  • Fedora stays pretty close to upstream with respect to packages and the kernel, so I'm not sure you'd be losing much compared to Arch.

    But the debate isn't that important to me either. I've been running Fedora and have on a few occasions hit instabilities caused by updates (mostly Nvidia with Wayland), so I can totally understand someone wanting stability and reliability over bleeding edge.






  • This is not the case for language models. While computer vision models train over multiple epochs, sometimes hundreds (an epoch being one pass over all training samples), a language model is often trained for just a single epoch, or in some instances up to 2-5. Learning so much from seeing each token so few times is actually quite impressive. Then again, some studies show that language models are in effect compression algorithms scaled to the extreme, so in that regard it might not be that impressive after all.
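
    To make the terminology concrete, here is a minimal sketch of what an epoch means in a training loop; the dataset size, epoch counts and update function are placeholders I made up, not anyone's real setup:

    ```python
    # Minimal sketch: an "epoch" is one full pass over the training set.
    # Vision models often run hundreds of epochs; large language models
    # are frequently trained for just one (sometimes 2-5).

    def train(step_fn, dataset, n_epochs):
        exposures = 0
        for _ in range(n_epochs):
            for sample in dataset:   # one pass over all samples = one epoch
                step_fn(sample)      # one (dummy) gradient update
                exposures += 1
        return exposures

    dataset = list(range(1000))      # stand-in for 1000 training samples

    # A vision-style run revisits every sample 300 times...
    print(train(lambda s: None, dataset, n_epochs=300))  # 300000 updates
    # ...while an LLM-style run sees each sample only once.
    print(train(lambda s: None, dataset, n_epochs=1))    # 1000 updates
    ```

    The compression framing is one way to explain why that single pass can be enough: the model isn't revisiting samples, it's squeezing regularities out of them as they stream past.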