That kind of data sanitization is just standard practice. You need some level of confidence in your data’s accuracy, and for anything normally distributed, throwing out obvious outliers is a safe bet.
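For normally distributed data, “obvious outlier” usually gets operationalized as something like “more than three standard deviations from the mean”. A minimal shell sketch of that filter (the 3σ cutoff and the toy data are my own assumptions, not from the comment above):

```shell
# Sketch: drop values more than 3 standard deviations from the mean.
# Expects one numeric value per line; the file is passed to awk twice
# so it can compute the mean/sd on the first pass, then filter on the
# second. Toy data for illustration only.
printf '%s\n' 5 5 5 5 5 5 5 5 5 5 1000 > samples.txt
awk '
  NR == FNR { sum += $1; sumsq += $1 * $1; n++; next }            # pass 1: stats
  FNR == 1  { mean = sum / n; sd = sqrt(sumsq / n - mean * mean) } # derive mean/sd
  ($1 - mean) <= 3 * sd && (mean - $1) <= 3 * sd                  # pass 2: keep inliers
' samples.txt samples.txt
```

With that toy input, the lone 1000 falls outside the 3σ band and gets dropped, while all the 5s pass through.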
Maybe my taste buds are just broken, but for me, candy has always been either very sour for a very short time, or slightly sour all the way through. I’ve never had anything be very sour all the way through.
A “server” is just a remote computer “serving” you stuff, after all. Although, if you have stuff you would have trouble setting up again from scratch, I’d recommend you look into making at least those parts of your setup repeatable, be it something fancy like Ansible, or even just a couple of bash scripts to install the correct packages and back up your configs.
Once you’re in this mindset and take this approach by default, changing machines becomes a lot less daunting in general. A new personal machine takes me about an hour to set up, preparing the USB included.
If it’s stuff you don’t care about losing, ignore everything I just said. But if you do care about it, I’d start slowly, working from the most critical parts down to the least. There’s no better time to do it than when things are working well haha!
58% goes to fundraising, administrative and technological costs. The rest has some money going towards, but not limited to, other programs.
The only thing I can find in their financials that might qualify as “random outreach” is “awards and grants”, at 26mil last year out of 185mil in revenue, or 14%.
https://meta.m.wikimedia.org/wiki/Grants:Programs/Wikimedia_Community_Fund
As far as I can tell, it’s not particularly random.
Maybe I’m missing something?
Tramp is more full-featured, but if all one cares about is being able to edit remote files using a local editor, vim can edit remote files over scp too: scp://user@server[:port]//remote/file.txt
I tried tramp-mode at some point, but I seem to remember some gotchas with LSP and pretty bleh latency, which didn’t make it all that useful to me… But I admittedly didn’t spend much time in emacs land.
> Really bigger updates obviously require a major version bump to signify to users that there is potential stability or breakage issues expected.
If your software follows semver, not necessarily. A major version bump is only required when a change breaks backwards compatibility. You can have very big minor releases and tiny major releases.
> there was more time for people to run pre-release versions if they are adventurous and thus there is better testing
Again, from experience, this is assuming a lot.
From experience shipping releases, “bigger updates” and “more tested” are more or less antithetical. The testing surface area tends to grow exponentially with the number of features you ship in a given release, to the point that I see small, regular releases as a better sign of stability.
I’d love to share your optimism, especially regarding that last sentence. As long as Google controls the most popular web browser out there, I don’t see the arms race ever stopping, they’ll just come up with something else. It wouldn’t be the first time they push towards something nobody asked for that only benefits themselves.
Your first hint that this is a naive take is that you’re reducing a societal issue to a single, external factor.
I do connect to VMs and containers all the time, I just don’t see a reason not to speed myself up on my own machines because of it. To me, the downside of occasionally typing an alias on a machine that doesn’t have it is much less than having to type everything out, or search my shell history for longer commands, every single time. My shell configs are in a dotfiles repo I can clone to new personal/work machines easily, and I have an alias to rsync some key parts to VMs if needed. Containers, I just always assume I don’t have access to anything but builtins. I guess if you don’t do the majority of your work on a local shell, it may indeed not be worth it.
I’d rather optimize for the 99% case, which is me getting shit done on my machine, than refuse to use convenient stuff for the sake of maybe not forgetting a command I can perfectly well look up if I do happen to forget it. If I’m on a remote, I already don’t have access to all my usual software anyway, so what’s a couple more aliases? To me this sounds like deliberately slowing yourself down by cutting paper with a knife all the time because you may not have access to scissors when you happen to sit at someone else’s desk.
Music (and other art forms) happens to trigger our brains to release the same happy/sad/etc. chemicals that other, less abstract physical experiences do, for reasons we don’t completely understand. I’m utterly confused as to why being aware of that, or having the curiosity to want to learn more about it, is “what’s going wrong with society”. If anything, curiosity is one of the main things that kickstarted us as a species, and brushing it off as some abstract “deeper layers of human existence”, like it was some sorcery we shouldn’t dare try to understand, would be way more concerning for our state as a society. As for the completeness of this particular theory… I mean, we are on /c/showerthoughts after all.
Jazz has patterns and repetition, like any interesting music genre. If it didn’t, it’d be called noise. They just aren’t as in your face and predictable as the ones employed by pop genres.
Polyrhythms and polymeters are still patterns. They’re often harder to perceive and follow than your typical 4/4, but we’re still searching for the beat and bobbing our heads to the complex patterns it creates.
That’s not “self hosting” related tho lol
Oh, that’s for sure. The thing is, you need to be open to the idea that there could be contradictions to realize they are there. If you approach your readings already believing that you are a mere sinner who, in the end, can’t really understand God’s Plan™, it gets easier to brush off the inconsistencies.
That’s why I said “as a general rule”. I’m not sure I would consider fundamentalists to be representative of your average Christian - their whole thing is Biblical literalism, after all… I was raised Catholic, in an era where we still had religious courses in school, and I can pretty safely say that pretty much nobody read it outside the bare minimum they had to for First Communion/Confirmation/wedding prep.
It desperately needs interface types if we ever hope to make it a serious contender for general-purpose web development. The overhead of having to go through JS to use any web API is significant, and it limits a lot of use cases.
Considering the community we are on, I assumed the criticism was more about the privacy problems surrounding the engine and browser security model than the quality of the language itself. If that was the intent, I mean… Yeah, its weak typing is a fucking mess.
We do get what you mean (extremely condescending and reductive take, if you ask me). I was thinking rigidly along the lines of data engineering, as this is, well, a data engineering problem… There just aren’t 30% of people doing this on Google captchas, and this isn’t a “take”, just a reality of the scale and number of people interacting with Google products. Have fun all you want; if you do this, your data most likely gets thrown out, that’s all.
We’re still talking about image recognition, aren’t we? This feels like a general commentary on how Big Tech sees their customer base, which I don’t disagree with, but in my mind was just another discussion entirely…