• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • Current LLMs are manifestly different from Cortana (🤢) because they are actually somewhat intelligent. Microsoft’s Copilot can do web search and perform basic tasks on the computer, and because of their exclusive contract with OpenAI they’re going to have access to more advanced versions of GPT that can do higher-level control and automation on the desktop. It will 100% be useful for users to have this available, and I expect even Linux desktops will eventually add local LLM support (once consumer compute and the tech mature). It is not just glorified autocomplete; its outputs actually correlate fairly well with real human language cognition.

    The main issue for me is that they take all the data you input and mine it for better models without your explicit consent. This isn’t an area where open source can catch up without significant capital behind it, so we have to hope Meta, Mistral, and government-funded projects give us what we need to have a competitor.



  • “I use Signal to hide my data from the US government and big tech”

    “Wait, you seriously still use Reddit? Everyone switched to the Fediverse!”

    “Wow, can’t believe you use Apple! Android is so much better.”

    No one who isn’t terminally online understands what these statements mean. If you want people to use something else, don’t make it about privacy and choose something with fancy buttons and cool features that looks close enough to what they have. They do not care about privacy and are literally of the mindset “if I have nothing to hide I have nothing to fear”. They sleep well at night.





  • Yeah there’s no way a viable Linux phone could be made without the ability to run Android apps.

    I think we’re probably at least a few years away from being able to daily-drive Linux on modern phones with things like NFC payments working and a decent native app collection. It’s definitely coming, but it has far less momentum than even the Linux desktop does.




  • For the love of God please stop posting the same story about AI model collapse. This paper has been out since May, has been discussed multiple times, and the scenario it presents is highly unrealistic.

    Training on the whole internet is known to produce shit model output, requiring humans to produce their own high quality datasets to feed to these models to yield high quality results. That is why we have techniques like fine-tuning, LoRAs and RLHF as well as countless datasets to feed to models.

    Yes, if a model were for some reason trained on the raw internet for several iterations, it would collapse and produce garbage. But the current frontier approach to datasets is to have strong LLMs (e.g. GPT-4) produce high-quality synthetic data and train new LLMs on that. This has been shown to work with Phi-1 (really good at writing Python code, trained on high-quality, textbook-level content generated with GPT-3.5) and Orca/OpenOrca (a GPT-3.5-level model trained on millions of examples from GPT-4 and GPT-3.5). Additionally, GPT-4 itself has likely been trained on synthetic data, and future iterations will train on more and more.

    Notably, by selecting a narrow, high-quality range of outputs instead of the whole distribution, we are able to avoid model collapse and in fact produce even better outputs.
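    The curation idea above can be sketched in a few lines of Python. This is a toy illustration, not any lab’s actual pipeline: the `quality_score` heuristic and the keep fraction are invented stand-ins for a real filter such as a reward model or classifier.

```python
# Toy sketch of curating synthetic training data: instead of training on
# everything a teacher model emits, keep only the top-scoring fraction.
# quality_score is a made-up heuristic standing in for a real quality filter.

def quality_score(sample: str) -> float:
    """Pretend quality metric: longer, code-bearing samples score higher."""
    score = min(len(sample) / 100, 1.0)
    if "def " in sample:  # e.g. prefer samples containing actual code
        score += 0.5
    return score

def curate(samples: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Keep only the best-scoring fraction of synthetic samples."""
    ranked = sorted(samples, key=quality_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]
```

    Training only on the surviving samples is the "narrow range of outputs" trick: the low-quality tail that would otherwise accumulate across iterations never makes it into the next dataset.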



  • coolin@beehaw.org to Memes@lemmy.ml · It's Open Source! (edited · 1 year ago)

    Based NixOS user

    I love NixOS but I really wish it had some form of containerization by default for all packages like flatpak and I didn’t have to monkey with the config to install a package/change a setting. Other than that it is literally the perfect distro, every bit of my os config can be duplicated from a single git repo.
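    For anyone who hasn’t tried it, the “duplicated from a single git repo” part looks roughly like this. A minimal, hypothetical `configuration.nix` fragment (the package and user names are made up); treat it as a sketch, not a working config:

```nix
# Minimal sketch of a declarative NixOS config: installed packages and
# settings live in version-controlled text, so the same repo reproduces
# the same system on another machine via `nixos-rebuild switch`.
{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ firefox git neovim ];
  services.openssh.enable = true;
  users.users.coolin.isNormalUser = true;
}
```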


  • I don’t know what type of chatbots these companies are using, but I’ve literally never had a good experience with them, which makes no sense considering how advanced even something like OpenOrca 13B is (a GPT-3.5-level model that can run on a single graphics card in some company server room). Most of the ones I’ve talked to are from some random AI startup and have cookie-cutter preprogrammed text responses that feel less like LLMs and more like a flow chart plus a rudimentary classifier selecting an appropriate response. We have LLMs that can do the more complex human tasks of figuring out problems and suggesting solutions, and that can query a company database to respond correctly, but we don’t use them.
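    The “flow chart plus rudimentary classifier” pattern described above looks roughly like this hypothetical sketch (the keywords and canned responses are invented for illustration):

```python
# Sketch of the keyword-matching "chatbot" pattern many support widgets use:
# a crude classifier routes the message to one of a few canned responses,
# with no actual language understanding involved.

CANNED_RESPONSES = {
    "billing": "Please check our billing FAQ or contact accounts@example.com.",
    "password": "You can reset your password from the login page.",
    "shipping": "Orders typically ship within 3-5 business days.",
}

FALLBACK = "Sorry, I didn't understand. Connecting you to an agent..."

def classify(message: str) -> str:
    """Rudimentary intent 'classifier': first keyword hit wins."""
    text = message.lower()
    for intent in CANNED_RESPONSES:
        if intent in text:
            return intent
    return "fallback"

def respond(message: str) -> str:
    return CANNED_RESPONSES.get(classify(message), FALLBACK)
```

    Anything outside the keyword list falls straight through to the fallback, which is why these bots feel like a flow chart rather than an LLM.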





  • The natural next place for people to go once they can’t block ads on YouTube’s website is services that exploit the API to serve free content (NewPipe, Invidious, youtube-dl, etc.). If that happens at a large scale, YouTube might shut off its API just like Reddit did, and we’d end up in a scenario where creators are forced to move to PeerTube; given how costly hosting is for video streaming, that could be much worse than Reddit->Lemmy+KBin or Twitter->Mastodon. Then again, YouTube has survived enshittification for a long time, so we’ll have to wait and see.


  • FediSearch I guess is similar to your idea, though I think the goal would be to make a new and open search index specifically containing fediverse websites instead of just using Google. I also feel like the formatting should be more like Lemmy, with the particular post title and short description showing instead of the generic search UI.

    The idea of a fediverse search is really cool though. If things like news and academic papers ever got their own fediverse-connected service, I could see a FediSearch being a great alternative to the AI sludge of Google.