• 0 Posts
  • 83 Comments
Joined 1 year ago
Cake day: August 8th, 2023

  • They’re conservative. The whole name is based on the principle that they want to maintain the old ways rather than progress. I think it stems from fear of a changing world. The old world with the old rules provided safety: it was understandable, the rules were clear, and the rules didn’t hurt them. Now some people are “attacking” their world, their rules, everything that offers them safety and understanding. So they feel attacked.

    It’s the same thing every time, just with a different subject. Whether it’s women getting rights, which threatens their safe world with its clear gender roles. Or gay people, who threaten the simple rules like “boys love girls” and “to be successful, get a job, marry, and have kids”. Or non-white people getting rights. What if they vote for things that “we” don’t want? What if “they” ruin the world that “we” got so used to?

    Trans and especially non-binary people are just the next group in line that threatens their simple world. When men are people born as men and women are people born as women, it’s way easier to force people into the traditional roles. The old rules still work: “boy marries girl, has kids”. And when they speak out about their “concerns” they are (rightfully) called out for it. So they become defensive and start doing whatever it is they’re doing now.


  • Oh I’m sure there are. I see more and more idiots with big American trucks here in the Netherlands. They simply don’t fit in our cities, which are designed for normal-sized cars. I also don’t see how they’re considered safe: the top of the hood is so high that upon impact you’ll mostly be hit by the front grille.

    I also doubt most of those people really need one; they seem more like the type who compensate for their insecurities by driving a big truck. I can sort of see a farmer or someone like that using one, but in a city I don’t understand it. I guess they’re not banned because that would upset the US or something.





  • Manjaro. I had previously used Antergos and Ubuntu, but after Antergos was discontinued I needed something like it, so I installed Manjaro on my secondary PC (with old components). I constantly got into trouble with the manual kernel version selection thingy. I was used to kernel updates being part of the normal update process, and suddenly I had to manually pick the new one. I constantly ran into incompatibility issues with older or newer kernels, and into vague update deadlocks where I couldn’t update things because they depended on each other, and I absolutely hated having to use a separate program for updating the kernel. Now the PC runs Fedora and I’m liking that a lot more so far…


  • For tasks that I know, I’m faster in the terminal. For tasks I’m less familiar with, or that are very important (like disk partitioning), I prefer a GUI, because with a GUI I can usually see a bit better what I’m doing.

    Terminal tasks for me include copying stuff, setting folder permissions, uncompressing or compressing folders, quick edits in vim, etc.





  • Generating meaningful text in an image is very complex. Most of these models, like DALL-E and Stable Diffusion, are essentially guided denoising algorithms. They get images of pure noise and are told that each is actually just a very noisy image of whatever the description is. So all they do is remove some noise for many steps in a row until a clear image emerges. You can kinda imagine it as the “AI” staring into the noise to see the image that you described.
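    That “remove some noise for many steps” loop can be sketched in a few lines. This is a toy illustration, not a real diffusion model: the `target` array and the perfect-oracle prediction stand in for a trained neural network conditioned on the text prompt, and the step size `alpha` stands in for a proper noise schedule.

```python
import numpy as np

# Toy sketch of guided denoising (illustrative only -- a real model replaces
# the "oracle" below with a trained network conditioned on the prompt).

rng = np.random.default_rng(0)
target = np.linspace(0.0, 1.0, 16)   # stand-in for "the image the prompt describes"

x = rng.standard_normal(16)          # start from pure noise
steps = 50
for t in range(steps):
    # The "denoiser" predicts the clean image; here it is a perfect oracle.
    predicted_clean = target
    # Blend a small step toward the prediction; the final step lands on it.
    alpha = 1.0 / (steps - t)
    x = x + alpha * (predicted_clean - x)

# After all steps the noise has been fully "removed" and x matches the target.
```

    The interesting part in a real model is entirely inside the prediction step: the network has to guess, from a noisy image plus a text description, what the clean image should look like.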

    Most real-world objects are of course quite complex. If the model sees a tree branch in the noise, it also needs to make sure that the rest of the tree fits. And a car headlight only makes sense if the rest of the car is also there. But for text these kinds of correlations are far harder. In order to generate meaningful text, the model not only needs to understand how text is usually spaced and that letters are usually written in a consistent font, it also needs to learn the entire English language. All that just to generate something that probably influences its overall “score” on images from the dataset less than learning how to draw a realistic car does.

    So in order to generate meaningful text, the model requires a lot of capacity. Otherwise, since it’s not specifically motivated to learn to write meaningful text, it’ll keep doing whatever it’s doing now. Honestly, given all these considerations, I’m sometimes quite impressed with how well these models do generate text.

    EDIT: Another few things came to mind:

    • Relating images and text (and thus guiding the image generator) used to be done with a separate (AI) model. Not sure if that’s still the case. So two models need to understand the English language to generate meaningful text: the generator and the image-to-text translation model.

    • So why can AI like ChatGPT generate meaningful text? In short, they are fully dedicated to outputting language. They output text as text and thus can be easily scored on it. The neural network architecture is also far more suited to it, and they see way more text.







  • I’m not sure if the writer of this article is familiar with Dutch politics, but nothing unusual is happening. Unlike in the US, where one party wins an absolute majority, we have a system where you need to form a coalition. And this is obviously a whole political game, with parties kinda pretending they don’t want it just to get a better bargaining position. This formation process has taken months for as long as I can remember, and personally I don’t feel like it’s going particularly badly for them.

    I still think the coalition of PVV, NSC, VVD, and BBB will happen in some way or another. But the other parties do want the PVV to make clear that some of its plans are just not going to happen. Wilders is aware of this, and suddenly seems a lot more reasonable than I’m used to (though obviously still far-right). Ideas like a Nexit or a full immigration stop are just not executable, so they’ll have to be toned down.


  • Baudet is definitely on the edge, yeah. Obviously there comes a moment when you shouldn’t tolerate the intolerant, and a democratic society shouldn’t protect the undemocratic in the name of democracy. Up until now I don’t think either of them has done enough to deserve that, though. Randomly calling for a politician to be killed, even as a joke, is in my opinion still in bad taste. It’s not the way I’d want Dutch politics to go, even for those I disagree with. Violence and hate only create more violence and hate.


  • This isn’t a very appropriate joke, though. In the early 2000s a politician similar to Wilders was killed for his opinions. Another (even further) right-wing politician was assaulted twice in the past few months, and Wilders himself hasn’t been able to go anywhere without bodyguards for years. We’re still a democracy; Wilders should be able to say things without getting constant death threats. Unlike in the US, things are not yet as polarized here. To me it feels like the comedian made a very inappropriate “joke”, even if I also absolutely despise Wilders.


  • Obviously this is a privacy community, and this ain’t great in that regard, but as someone who’s interested in AI this is absolutely fascinating. I’m now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn’t generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And why would these specific inputs trigger such a response? Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it’s even expected behaviour; after all, we as humans also have the ability to recite pieces of our “training data” if we deem them interesting enough.