

“Since the dataset isn’t 100% perfectly annotated for analysis, we should give up the whole project entirely.”
The 102GB includes pictures then? That’s insane.
As of 2025, the English Wikipedia has 63,040,591 pages. The current text content across all of its pages is about 156 GB. When all revision histories are counted, the total is 26,455 GB (about 26 TB).
I’m sure plenty of people have already archived this, or at least taken snapshots. Not sure if that figure is uncompressed, but if so, compression would save a shitload of space.
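For what it’s worth, Wikimedia already publishes compressed dumps; a rough sketch of grabbing the current-pages dump (the filename pattern is from dumps.wikimedia.org, but the exact name and size may differ by the time you read this):

```
# latest English Wikipedia current-pages dump, bzip2-compressed
# (roughly ~20 GB compressed vs ~100+ GB of raw XML)
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2

# decompress only if you actually want the raw XML (-k keeps the archive around)
bunzip2 -k enwiki-latest-pages-articles.xml.bz2
```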
I see two new features that look fantastic, but the rest of the UI looks largely unchanged. I’ll definitely give it a shot though.
I’m doing it with a Jellyfin client connected to my friend’s Jellyfin server.
GIMP is unfortunately not a good competitor; the UX/UI is atrocious, and I say that after 25 years of using it… I’ve switched to Krita for most things at this point. GIMP needs some sort of revamp.
Why would you spam in this thread?
Collection of personal data is arguably worth money to them though, for advertising and whatever else they’re doing.
A lot of their chips are fabbed in the US, Israel, Germany, and elsewhere though. It’s weird that nobody has mentioned all their US fabs. The new ones coming up in Ohio (construction has already been underway for a while) will be two next-gen fab plants.
Does Intel make its main CPUs in China, such that those high tariffs would hit them?
Looked it up and found this info at least:
- Key US locations: Arizona (Fab 52 and 62), New Mexico (Fab 9 and 11x), and Oregon (Hillsboro) are major Intel manufacturing hubs in the US, with Fab 42 and 32 also part of a larger campus in Arizona. Ohio is also a major site, with construction well underway for two new leading-edge chip factories.
- Global footprint: Intel also has manufacturing facilities in Israel (Jerusalem, Kiryat Gat) and Ireland (Leixlip).
- Expansion and future: Intel is actively expanding its global network with new fabs in Ohio, Germany, and other locations, according to Intel Newsroom, and plans to make the German fab one of the most advanced in the world.
That would be pretty nice. Our plates are expensive over here (US), so we just put a tiny new year sticker on them each time and keep the plates for a long time.
I’ve only used it for export; I have yet to try import. I’m assuming that works well too, since it had good reviews as far as I remember.
Ren from Ren and Stimpy?
https://ollama.ai/ is what I’ve been using for over a year now. New models come out regularly, and you just run “ollama pull <model ID>” to make them available locally. Then you can use Docker to run https://www.openwebui.com/ locally, which gives you a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once).
All free and available to everyone.
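If anyone wants to try it, the whole setup is roughly this; the Open WebUI flags are from their Docker quickstart and may have changed since, so treat it as a sketch:

```
# grab a model and chat with it in the terminal
ollama pull mistral
ollama run mistral

# run Open WebUI in Docker (it reaches the local ollama via host.docker.internal)
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
```

Then open http://localhost:3000 and prompt whichever pulled models you like.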
In my experience it depends on the math. Every model seems to have different strengths across a wide range of prompts and information.
+1 for Mistral; they were the first (or one of the first) to release models under the Apache open-source license. I run Mistral-7B and various fine-tunes locally, and they’ve always been really high quality overall. Mistral-Medium packs a punch too (mid-size, obviously), but it definitely competes with the big ones.
They’re fighting harder for non-citizens than citizens at this point it seems. Not entirely sure why.
86 billion neurons in the human brain isn’t that much compared to the 1.7 trillion parameters in some of the larger neural networks though.