I imagine that was part of it, but I doubt it’s the actual main reason. More of a post-hoc justification.
projectmoon
Keyoxide: aspe:keyoxide.org:MWU7IK7RMUTL3AP6U6UWCF4LHY
projectmoon@lemm.ee to Technology@lemmy.world • Gov. Landry signs new drone defense law; first in nation • English • 7 points • 24 days ago
Or if you just ignore federal courts, which seems to be the current fashion.
projectmoon@lemm.ee to Linux@lemmy.ml • What is your most useful Linux app which others might not know about (please don’t just give the name but a link and why it is good for you)? • 9 points • 29 days ago
Rclone can do file mounts as well as sync.
projectmoon@lemm.ee to No Stupid Questions@lemmy.world • How does AI-based search engines know legit sources from BS ones? • 13 points • 1 month ago
A lot of the answers here are short or quippy, so here’s a more detailed take. LLMs don’t “know” how good a source is. They are word-association machines, and they are very good at that. When you use something like Perplexity, an external API feeds the search results into the LLM, which then summarizes that text in (hopefully) a coherent way. There are ways to reduce the hallucination rate and check the factualness of sources, e.g. by comparing the generated text against authoritative information. But how much of that Perplexity et al. actually employ, I have no idea.
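A minimal sketch of that pipeline in Python, assuming a hypothetical search_api and llm client. This shows the general retrieval-augmented technique, not Perplexity’s actual implementation, which isn’t public:

```python
# Hypothetical retrieval-augmented answering pipeline. The search_api and
# llm objects are assumed stand-ins, not any real library's API.

def answer(question: str, search_api, llm) -> str:
    # 1. An external search API fetches documents for the query.
    results = search_api.search(question, max_results=5)

    # 2. The retrieved text is stuffed into the prompt as context.
    context = "\n\n".join(r["snippet"] for r in results)
    prompt = (
        "Answer using ONLY the sources below, and cite them.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    # 3. The LLM summarizes whatever it was handed. It never "knows"
    #    source quality; it only associates words in the given text.
    return llm.generate(prompt)
```

The fact-checking step mentioned above would be another pass of the same shape: feed the draft answer plus authoritative text back in and ask the model whether they agree.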
projectmoon@lemm.ee to Free and Open Source Software@beehaw.org • You Can Choose Tools That Make You Happy • 4 points • 2 months ago
I feel like this article is exactly the type of thing it’s criticizing.
projectmoon@lemm.ee to Technology@beehaw.org • Duolingo will replace contract workers with AI • 11 points • 2 months ago
The problem is that while LLMs can translate, it’s still machine translation and isn’t always accurate. And it won’t stop at translation: they’ll be applying “AI” to everything that looks like it might vaguely fit, and it’ll stifle productivity.
projectmoon@lemm.ee to Android@lemdro.id • Introducing Octopi Launcher - Now in Open Beta! • English • 8 points • 3 months ago
Is the code available somewhere?
projectmoon@lemm.ee to World News@lemmy.world • Scores killed in US strikes on Yemen fuel port of Ras Isa, Houthi officials say • English • 7 points • 3 months ago
Well, when Roosevelt was elected four times, it was actually legal back then, and he’s the reason the two-term limit amendment exists. But of course, that requires actually following the law, so…
projectmoon@lemm.ee to Technology@lemmy.world • Google created a new AI model for talking to dolphins • English • 20 points • 3 months ago
This is probably one of the best actual uses for something like generative AI. With enough data, they should be able to vectorize and translate dolphin language, assuming there is one.
Lol, there are smaller versions of Deepseek-r1. These aren’t the “real” Deepseek model; they’re other foundation models (Qwen2.5 and Llama3 in this case) distilled from r1’s outputs.
For the 671b parameter model, the medium-quality quantized file weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing, and preferably ALL of it in VRAM (i.e. GPU memory) if you want it to generate anything fast.
For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b parameter version of Llama3 at Q4 quant (medium quality-ish), it’s a 40 GB file. It’ll run, but mostly on the CPU. It generates ~0.85 tokens per second. So a good response will take 10-30 minutes. Which is fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB VRAM each, that’d be 48 total GB and I could run the whole model in VRAM and it’d be very fast.
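The arithmetic behind those file sizes is simple enough to check yourself. As a rough rule of thumb, a quantized model file is about parameter count × bits per weight ÷ 8. A minimal back-of-the-envelope sketch in Python (the bits-per-weight figures are approximations for Q4-class quants, not exact format specs):

```python
def model_file_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough quantized model file size: parameters * bits per weight / 8."""
    return params_billions * bits_per_weight / 8  # billions of bytes ~= GB

# Q4-class quants average roughly 4.5-4.8 bits per weight.
print(model_file_gb(70, 4.5))   # ~39 GB  -- the 70b Llama3 file above
print(model_file_gb(671, 4.8))  # ~403 GB -- the 671b Deepseek file above

# And the wait time at CPU-bound generation speeds:
tokens, tok_per_sec = 1000, 0.85
print(f"~{tokens / tok_per_sec / 60:.0f} min for a {tokens}-token reply")  # ~20 min
```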
They’re probably referring to the 671b parameter version of Deepseek. You can indeed self-host it. But unless you’ve got a server rack full of datacenter-class GPUs, you’ll probably set your house on fire before it generates a single token.
If you want a fully open source model, I recommend Qwen2.5 or maybe Deepseek v2. There’s also OLMo 2, but I haven’t really tested it.
Mistral Small 24b also just came out and is Apache-licensed. That’s something I’m testing now.
Most open/local models require a fraction of the resources of ChatGPT, but they’re usually not AS good in a general sense. They’re often good enough, though, and can sometimes surpass ChatGPT in specific domains.
projectmoon@lemm.ee to Open Source@lemmy.ml • How to run LLaMA (and other LLMs) on Android. • 3 points • 5 months ago
It’s enough to run quantized versions of the distilled r1 models based on Qwen and Llama 3. Don’t know how fast it’ll run, though.
projectmoon@lemm.ee to Technology@lemmy.world • Thanks to Nvidia, there's a new generation of PCs coming, and they'll be running Linux • English • 29 points • 6 months ago
Don’t know about “always.” In recent years, like the past 10 years, definitely. But I remember a time when Nvidia was the only reasonable recommendation for a graphics card on Linux, because Radeon was so bad. This was before Wayland, and probably even before AMD bought ATI. And it was certainly long before the amdgpu drivers existed.
For stuff like that, it’s best to have an automated style checker like Checkstyle or something.
Had a team lead who kept requesting nitpicky changes, going in FULL CIRCLES about what we should or shouldn’t change, to the point that changes would take weeks to get merged. Then he had the gall to say that changes were taking too long to be merged and that we couldn’t just leave code lying around in PRs.
Jesus fucking Christ.
There’s a reason that team imploded…
projectmoon@lemm.ee to Fediverse@lemmy.world • Threads is making moves for Mastodon integration • English • 17 points • 2 years ago
So is there a way to follow someone on Threads now? Or at least get one’s instance to load a post? Where are the details of this beyond Zuckerberg’s post?
But wouldn’t you calculate the time in the future in the right time zone and then store it back as UTC?
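For example, a minimal Python sketch using the standard zoneinfo module (the “9:00 tomorrow in Berlin” scenario is made up for illustration):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

# Compute the future time in the user's own time zone first...
berlin = ZoneInfo("Europe/Berlin")
tomorrow_9am = (datetime.now(berlin) + timedelta(days=1)).replace(
    hour=9, minute=0, second=0, microsecond=0
)

# ...then convert to UTC for storage. Any DST shift is resolved at
# calculation time, so the stored instant matches the intended wall clock.
stored = tomorrow_9am.astimezone(timezone.utc)
print(stored.isoformat())
```

The order matters: computing “now + 24 hours” directly in UTC can land an hour off the intended wall-clock time across a DST boundary.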
I use SimpleLogin a lot too. But be careful, as some sites reject these email addresses. Or, in the case of Shell Recharge, they changed their business logic to reject the email addresses without letting me switch to another email … Haven’t been able to log in for months 🙃
I know. I have NodeBB as a backup.