• 0 Posts
  • 72 Comments
Joined 1 year ago
Cake day: August 21st, 2023

  • Not necessarily. You probably want to optimize the kernel and a handful of packages. Then there are some apps you want built with specific features enabled. Then there’s a bunch of stuff that takes forever to build where a binary would be convenient. Flags and optimizations aren’t that important for KDE Frameworks or Firefox.

    Offering binaries is a really nice middle ground. Gentoo makes it so easy to build custom packages from source, but it’s always been all or nothing. I don’t want to wait 2-3 hours building updated libraries or Firefox every time there’s a patch (rough sketch of a mixed setup below).

    Personally, I would be interested in a distro that had binary packages, easy builds like Gentoo and something like Arch’s AUR.
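    For what it’s worth, Portage can get most of the way there with a binhost. A minimal sketch, assuming a binary package host is already configured in binrepos.conf (the package atoms are only examples):

      # /etc/portage/make.conf -- sketch, not a drop-in config
      # Prefer prebuilt packages from the configured binhost by default
      FEATURES="${FEATURES} getbinpkg binpkg-request-signature"
      EMERGE_DEFAULT_OPTS="--usepkg"

      # ...and force the few packages you actually want tuned to build from
      # source with your own flags (atoms are examples only):
      #   emerge --ask --usepkg-exclude 'sys-kernel/gentoo-sources x11-terms/kitty' @world

    Per-package CFLAGS and USE still work the usual way through package.env and package.use, so the kernel and a few hot packages stay hand-tuned while KDE Frameworks and Firefox come down as binaries.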





  • Yeah, it works fine. You might want to tinker with the packages as others have suggested, but it’s exactly what you’d expect from Fedora. The only difference is it’s Plasma instead of GNOME.

    I had the same experience with GNOME on the family computer. I had to add extensions to make it more accessible. Then when GNOME auto-updates, you get dumped into vanilla GNOME until you log out and back in to re-enable the extensions. I would get called over every time that happened. I switched it to Plasma and everyone is happy.

    One thing worth pointing out: the Dash to Dock/Panel, Just Perfection and AppIndicator GNOME extensions are all in the Fedora repositories. When you install them from there, you don’t get that janky behavior during updates where you have to re-enable them. Those extensions go a long way toward making GNOME more accessible to users coming from Windows or Mac. Default GNOME is great if you use keyboard shortcuts, but it’s not very intuitive when you’re starting out.
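    If it helps, installing them from the Fedora repos looks roughly like this; the package names are from memory and may have drifted, so treat them as approximate:

      # Install the extensions as Fedora packages instead of from extensions.gnome.org
      # (names approximate -- verify with: dnf search gnome-shell-extension)
      sudo dnf install gnome-shell-extension-dash-to-dock \
                       gnome-shell-extension-appindicator \
                       gnome-shell-extension-just-perfection

    Packaged extensions are rebuilt against the GNOME version Fedora ships, which is presumably why they survive updates without the re-enable dance.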


  • Joker@discuss.tchncs.de to Linux@lemmy.ml · Flatpack, appimage, snaps.. · 11 months ago

    It’s been that way since the dawn of computing. Developers will push hardware to its limits and the hardware people will keep making a faster chip. A lot of software was laggy as hell back in the day. Not to mention, it didn’t have any features compared to the stuff now. Plus our shit would crash all the time and take down the whole PC. Sure, you run across some shockingly fast and good apps but those have always been few and far between.





  • In all fairness, 13 days is a fairly quick turnaround for patching in the enterprise. The breach was only 6 days after disclosure. They were almost certainly in the planning stages already when this happened.

    I used to be the head of IT in a large organization that worked with clients in highly regulated sectors. They all performed regular audits of our security posture, and across the board they expected a 30-day patch policy. For high-profile vulnerabilities like this one, they would often send an alert and expect action within a commercially reasonable time frame. We would get it done anywhere from 24 hours to a few days later, depending on the situation and whether there were complications. It was usually easy for us because we were patching every device and application on the network every couple of weeks anyway. A hotfix is much easier to deploy when everything is already up to date and there are no prerequisite service packs. We knew we were much faster than most, and it took a lot of work to get there. Thirteen days is a little slow for a 0-day by our standards but nowhere near unreasonable.

    The reality is many enterprises don’t patch at all, or don’t do it completely. They may patch servers but not workstations. They may patch the OS but not the applications. It’s common to find EOL software in critical areas. A friend of mine did some work for a railroad company that had XP machines controlling the track switches. There are typically glaring holes throughout these companies when it comes to security. Most breaches go unreported.

    Look, I hate Comcast as much as anyone. They suck. But taking 13 days to patch isn’t unreasonable. Instead, people should be asking why there weren’t other security layers in place to mitigate the vulnerability.




  • This is wild. I almost wonder if it’s actually a real thing or an elaborate hoax. It’s impressive in either case.

    As far as the concept of AI news, there are obvious drawbacks but also some advantages. In particular, the anchors are less animated and emotional, which eliminates quite a bit of bias. Cable news anchors with their incredulity, snide remarks, and expressions have done a lot to help ruin the news. That alone can easily undermine a story or a guest in a way that causes the audience to pick a side.

    The idea of using AI to scour public records and create stories is another really cool idea. There’s so much out there and not enough reporters with the time or inclination to investigate everything.

    I’m not too keen on the AI-generated imagery, although traditional news outlets essentially do the same thing. It’s a dangerous thing to be presented with artificial pictures and videos in a news format. Before long, you can’t distinguish between what’s real and what’s artificial, which is more or less the same problem a significant portion of the country has had since the 2016 election. In that case, they were mostly fed stupid memes and fabricated stories on social media. This is a completely different level. In the wrong hands, this is a weapon of mass destruction.


  • This whole thing stinks. It’s the kind of lawsuit where you wish both parties could lose. The whole walled garden concept sucks, but this doesn’t exactly benefit consumers. Nobody wants a dozen different app stores where we need to set up accounts and payment info - not consumers and not small to medium size developers.

    If Epic gets what they’re asking for, it sure as hell won’t be what they want. Google still controls the OS, so they can just make some shitty third-party app store API with requirements just as onerous as IAP, putting everyone else at a disadvantage. If I’m Google, my new motto is “Android’s not done until Fortnite won’t run”.



  • On the surface, the biggest difference between distros will be the package manager and the update cadence. Most package managers are generally comparable, so I won’t get into that. The cadence has to do with release type - rolling or fixed - and how quickly updates are released. Do you want the newest packages, LTS, or somewhere in the middle? This is probably the first big decision to make when choosing a distro. The only real must-have here is that the distro provides timely security updates. Even a highly stable LTS should be pushing out security updates ASAP.

    Then you have default package choices, which are often superficial, like the DE or default apps. This can all be changed, so it’s not much of a concern. But there can also be more impactful choices, like whether a distro uses systemd, or glibc vs musl. The mainstream distros tend to use systemd and glibc, which is generally good, but know that you have other options if your specific use case requires it. There’s also package availability, meaning the number of packages in the repository, although this matters less than it used to because you have options like Flatpak or Nix for getting packages that aren’t in your distro’s repository.
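    As a rough illustration of that fallback, assuming Flathub is set up as a remote (the app ID is only an example):

      # Add the Flathub remote if it isn't there yet, then install something
      # that may not be in your distro's own repos
      flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
      flatpak install flathub org.signal.Signal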

    There are also some distros created with a specific use case in mind, such as Alpine for containers or Kali for testing network security.

    Finally, you have structure and governance. Some distros have corporate backing, others are community supported and still others aren’t much more than a hobby. The ones with corporate backing typically have options for paid support. In general, you want something with stable and competent governance where it will continue to thrive even as team members change. You can find examples of this in corporate-backed distros as well as community distros.

    So your biggest choices are going to be cadence, structure/governance, and whether you may need paid support now or in the future.

    As for what distro developers actually do… First, they build the tooling and infrastructure to make their distro work - package manager, packaging tools, repository, etc. Then they are responsible for packaging everything available in the distro. They pull in source code for all these apps, compile it and put the binaries in the repository. They rebuild packages as required when there are updates to the source code. Some distros like Arch will build vanilla packages, meaning they don’t make changes to upstream code. Others may apply their own patches for various reasons. Some, like Red Hat, will patch upstream apps at customer request as part of their paid support services. So let’s say something isn’t working the way you need it for some random FOSS app included with the distro. You can put in a request and they will change it for you.

    As for your specific question about simulating Ubuntu on Fedora, that is not possible. They each use their own distinct package manager and repositories. They generally have similar packages, but they are not interchangeable. However, there are tools like distrobox, and distros like VanillaOS, that have mechanisms for using another distro’s packages. These use containers under the hood, so it’s not quite the same as just installing a .deb on Fedora or an .rpm on Ubuntu.
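    If you want to see what that looks like in practice, here’s a minimal distrobox sketch (the container name and image tag are just examples):

      # Create an Ubuntu container on a Fedora host and hop into it; inside you
      # get apt and Ubuntu's repos, and your home directory is shared with the host
      distrobox create --name ubuntu-box --image ubuntu:24.04
      distrobox enter ubuntu-box
      sudo apt update && sudo apt install <some-deb-only-package>

    distrobox-export can then put an app’s launcher in the host menu, but it’s still an Ubuntu userland in a container, not native Fedora packages.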




  • For starters, consider another distro if you want to make things easy on yourself. Alpine is probably a poor choice unless you have a reason to use it. I guess you could use it as a desktop if you really want to, but it’s more geared for containers and embedded devices. It uses musl instead of glibc so you will have problems running software that isn’t packaged for Alpine. The issue with Puppy is you will have a hard time getting help when you need it because it’s kind of a niche distro.

    For your first time, you’re better off using something more mainstream. You are going to run into some issues and it’s a lot easier finding solutions for popular distros. Debian would be a fine choice because it’s widely used and runs great on older hardware. Beyond that, you could look at Ubuntu, Fedora, PopOS and Mint.


  • Why wouldn’t he have to pay? He already had to pay when he bought it. He put in cash and took on debt. The only real benefit to this thing tanking is he can take a massive capital loss and probably never pay income tax again.

    The downside is he would owe a lot of money to people you don’t want to owe and he looks like a bigger moron by the day. Sooner or later, Tesla investors are going to get spooked by his deranged behavior and then he’s in a world of hurt. All his wealth is tied up in Tesla and he uses his position as CEO to pump the stock with all his bullshit lies. It’s like a giant Ponzi scheme. As soon as the value comes back down to Earth and the stock is priced like a normal company he’s in big trouble.