Surprisingly legible, but feels like I can only read it with momentum, flitting past it and letting my subconscious tell me where the word breaks are. The moment I get confused and look more closely, it becomes almost impossible to read.
Honestly, I like this idea, just because it means I could block your instance in my app and instantly filter out that kind of content, just like how someone can block lemmynsfw to get rid of almost all porn.
Not much of an addition, but you’re absolutely right: in most systems that are expected to be highly available, there are standard maintenance windows, an agreement in place, and no critical use of the system permitted to be scheduled in that regular time period. Any deployments are limited to that window, in case a rollback, data sync, etc. is necessary.
All of that is in addition to the type of high availability stuff you’re describing.
Very true. As someone who likes the all feed as a decent way to find new communities and just generally see more content, it’s been a lot of hitting the “Block Community” button, and even with NSFW turned off, there’s still an abundance of Lemmynsfw celeb-type content. I won’t even consider enabling NSFW until we get that functionality.
Agreed. The upload schedule has been a holy grail within LTT for a long time, and I truly believe it’s the root of all of this, yes, even the sexual harassment. Or at least how that harassment was handled so poorly. When do you have time to make good HR policies? Pull people into HR for reprimanding? Have opportunities for others to second guess decisions? Do training? Or heck, even just have less tired and irritable people making in-the-moment stupid decisions?
This uncompromising maximum velocity hurts everyone, and I hope they never go back to this pace, even after the process improvements they have planned.
Exactly the mistake Threads just made, trying to capitalize on Twitter’s rate-limiting fiasco. The “general public” is extremely fickle, and Reddit will give us more opportunities.
I’m not sure I understand your position here, because voting is such a minor part of the system. A troll that only trolls by upvoting and downvoting isn’t much of a threat, unless they’ve got a dozen alt accounts or a botnet, both of which are different situations that should be handled differently. “The definition of a troll” is ridiculous hyperbole.
And as far as bans are concerned, that’s a moderation problem, not your role as an individual. I’ve never suggested votes should be completely untraceable, that’d be patently ridiculous and remove the ability to actually handle vote manipulation. Moderators and admins should obviously have that access, as I’ve asserted in this thread.
I’m also not advocating my votes be anonymous, I’m fine with having them public on my page. That alone gives you the complete ability to make a judgement about me as a person, or whatever it is you want to do with that. What I’m suggesting is that a user who’s just been downvoted shouldn’t have a trivial way of linking it to the individual who downvoted them in order to harass them.
Frankly, the impression I’m getting is that you’re not actually paying much attention to the case I’ve made, and are instead just using my comments as a platform to have a completely different argument that you’re passionate about. That’s the ONLY way that you could have missed my point so entirely, and come to the conclusion that I could ONLY be a troll or a moron.
It’s not being a troll, downvotes are part of the system for a reason: suppressing toxicity. If you downvote a toxic comment to push it down in the algorithm, there shouldn’t be a risk of that toxic person deciding they have a grudge and attacking you personally. Otherwise you risk downvotes not being used for their intended purpose, and an overall more toxic environment.
Yeah, having it on your user page is much less dangerous, imo. Still a possibility of getting called out if you downvote someone you’re arguing with, but you’re already in the comments there.
The only way I see a problem is if someone writes a bot or extension that reads the user profile into something “per comment”, and if that gets enough traction and use to build up a strong database. However, in that case, I’d imagine the Lemmy devs would build a feature to let instance admins hide that information from regular users.
Oooh, good point. As an admin/moderator feature, that’s a much better idea.
Hmmm, tbh, I don’t think that’s a feature I’d want. Every now and again you see “that guy” furious that he’s getting downvotes, doubling down and trying to start an argument or something. I don’t need that guy showing up in my DMs.
Eh, there’s a lot of valid things to be skeptical about. Using these tools as a DM is fundamentally different from using them as a massive corporation, as you’re not considering replacing your team of talented artists and writers to cut costs.
That said, done right, I also think this could be amazing. Legally train these models on the wealth of historical D&D art, and provide it to DMs to use during their campaigns: maps, art for places the DM is describing on the fly, all the things no artist could possibly prepare in advance, because those locations are being invented live as the players throw a skilled DM curveballs. D&D feels like an ideal “problem” for a lot of the “solutions” AI has to offer.
I also feel like a lot of the value of a chronological feed is lost if I think I’m seeing algorithmic recommendations. If I don’t know I’m browsing the latest, I’ll likely just assume the algorithm is serving up garbage. Especially somewhere like Facebook, where people haven’t really been curating their feeds for years, just… following whoever to be polite and letting the algorithm take care of it.
Whoops
Well, happy you’re here!
Yeah man, same boat. I actually do have NSFW disabled, but I’ve still blocked at least a dozen or two lemmynsfw communities for actresses, celebs, gentlemenboners, ladyboners, ladyladyboners, just a metric ton of SFW porn communities.
Would make things so much easier if I could outright block the instance, and heck, maybe I’d even turn on NSFW in that case.
Yeah, I’ve realized I mostly want “social media” as a place to create discussions. For that, honestly, the smaller community size is perfect.
I find massive communities have a way of devolving into hive minds. Once you reach a critical mass of people who think one thing, any comment to the contrary is just… obliterated, whether by an exhausting amount of argument, or downvotes. And then it just becomes known that that’s the opinion of the community, and people stop even bringing it up. At least that’s my theory on how it happens.
Over here, with a smaller community size, I’m finding a lot more genuine conversation, no matter the topic. It’s awesome. And I’m still finding Lemmy large enough to bring me interesting links and memes to talk about.
Biggest mutant like this I ever made came from a government requirement to export PDFs. Best way I could find to make PDFs from PHP was a tool called wkhtmltopdf, which, as the name suggests, converts HTML to PDF.
Installed a library to let me call a local install of wkhtmltopdf on the command line of the host machine. Wrote a ridiculous HTML template, with all kinds of weird styling and jank to support the older version of WebKit that wkhtmltopdf used, saved the rendered output as a file, then ran wkhtmltopdf with that file as an argument.
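Roughly, that glue looked something like this; a minimal sketch, with the paths and the template helper made up, not the real code:

```php
<?php
// Hypothetical sketch: render the HTML template to a temp file, then
// shell out to a local wkhtmltopdf install. renderReportTemplate()
// and the paths are placeholders for illustration.
$html = renderReportTemplate($reportData);
$htmlPath = '/var/www/cache/report.html';
$pdfPath  = '/var/www/cache/report.pdf';
file_put_contents($htmlPath, $html);

// escapeshellarg() keeps the file paths shell-safe.
$cmd = 'wkhtmltopdf ' . escapeshellarg($htmlPath) . ' ' . escapeshellarg($pdfPath);
exec($cmd, $output, $exitCode);
if ($exitCode !== 0) {
    throw new RuntimeException('wkhtmltopdf failed: ' . implode("\n", $output));
}
```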
Of course, I wasn’t done here. They required that I use their existing title page, appendices, etc. Only the data in the middle was to change. So I added a whole “PDF Data” table to the database, with storage locations for them to upload something like 10+ PDFs to append at the front and back of the PDF. Did I mention this whole thing supported two languages?
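The lookup was essentially a row per uploaded document. Something in this spirit, with every table and column name invented for illustration:

```php
<?php
// Hypothetical "PDF Data" query: one row per uploaded PDF, with a
// front/back slot, a sort order, and a language. Schema is made up.
$stmt = $pdo->prepare(
    'SELECT file_path, position, sort_order
       FROM pdf_data
      WHERE language = :lang
      ORDER BY position, sort_order'
);
$stmt->execute([':lang' => $lang]);

$frontPdfs = $backPdfs = [];
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    if ($row['position'] === 'front') {
        $frontPdfs[] = $row['file_path'];
    } else {
        $backPdfs[] = $row['file_path'];
    }
}
```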
So then I pulled in another command-line tool, called pdftk, or PDF toolkit. I used a crazy pdftk call to append all of these to the front and back of the document, making it look like what they wanted. Save to that same folder, send the file to the client through PHP, and use my “command line from PHP” wizardry to rm all the files I’d made in my “cache” folder, as I called it.
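For the curious, pdftk’s handle syntax is what makes that stitching work: you alias each input file to a letter, then “cat” the letters in order. A rough sketch, all variable names made up:

```php
<?php
// Hypothetical append step. pdftk aliases each input (A=, B=, ...)
// and "cat" concatenates them in the order given. Single-letter
// handles cap out at 26 inputs, which was plenty here.
$inputs  = array_merge($frontPdfs, [$bodyPdf], $backPdfs);
$aliases = [];
$order   = [];
foreach (array_values($inputs) as $i => $path) {
    $h = chr(ord('A') + $i);
    $aliases[] = $h . '=' . escapeshellarg($path);
    $order[]   = $h;
}
$cmd = 'pdftk ' . implode(' ', $aliases)
     . ' cat ' . implode(' ', $order)
     . ' output ' . escapeshellarg($finalPdf);
exec($cmd, $out, $code);
```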
But of course… we’re not done. Turns out appending files like this horribly breaks the PDF table of contents, which was apparently just using page numbers, not any kind of actual linking. Enter pdftk again, and now I’m running it before generating my HTML, on each and every PDF I’m going to add, to get the page count, and saving that value.
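pdftk’s dump_data report includes a NumberOfPages line, so the counting step was presumably something like this hypothetical helper:

```php
<?php
// Hypothetical page-count helper. "pdftk file.pdf dump_data" emits a
// metadata report containing a "NumberOfPages: N" line we can parse.
function pdfPageCount(string $path): int {
    $data = shell_exec('pdftk ' . escapeshellarg($path) . ' dump_data');
    if (is_string($data) && preg_match('/NumberOfPages:\s*(\d+)/', $data, $m)) {
        return (int) $m[1];
    }
    throw new RuntimeException("Couldn't read page count for $path");
}
```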
I’d then pass this crazy dictionary into my template and add “fake pages” to the start and end, with headings, and a special margin that wkhtmltopdf interprets as a page break. This even works to add my “additional documents” to the table of contents. Now, my pdftk append commands also deliberately trim the PDF, so as to replace the fake pages, keeping the page count the same, so the links work.
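A fake page was basically just a heading forced onto its own page, so it lands in the generated ToC. Something in this spirit; the original used the margin hack mentioned above, but page-break-before is the more textbook way to force a break in wkhtmltopdf:

```php
<?php
// Hypothetical "fake page" markup appended to the template for each
// document pdftk will later splice in. The placeholder page gets
// trimmed out and replaced by the real PDF, so the total page count
// (and therefore every ToC link) stays correct.
$fakePage = <<<HTML
<div style="page-break-before: always;">
  <h1>{$docTitle}</h1>
  <!-- intentionally empty: replaced by the real uploaded PDF later -->
</div>
HTML;
```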
So close… but it turns out wkhtmltopdf doesn’t account for when the table of contents is so ridiculously long that it goes on for more than a whole page. Did I mention these PDFs are more than 300 pages long in many instances? Suddenly every link goes to the page after the one it’s supposed to, or even a couple after. Not good.
Yeah… this is the beginning of a nightmare where I add fake table of contents pages that I cut out later with pdftk. Which means I have to somehow know in advance how many pages the ToC will be… estimation time. That’s right: I run through all the data I generate the ToC with in advance, count the number of entries I’m going to add, and, using a magic number I got by literally counting the entries on a full ToC page, guess how many pages there will be.
Oh, but what if a line is too long, and wraps to two lines in the ToC? Well, guess who counted the number of characters in a line to produce another guesstimate? No, neither of these heuristics were perfect, and they looked like a spaghetti mess, but with enough tinkering, I got numbers that worked on everything I tested.
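If you’re wondering what those heuristics look like, here’s the gist; both magic numbers are invented here, the real ones came from literally counting a rendered ToC:

```php
<?php
// Hypothetical ToC-length estimator. Both constants are the "magic
// numbers" described above: counted by hand off a rendered ToC page.
const TOC_ENTRIES_PER_PAGE = 40; // lines that fit on one ToC page
const TOC_CHARS_PER_LINE   = 80; // characters before a title wraps

function estimateTocPages(array $tocTitles): int {
    $lines = 0;
    foreach ($tocTitles as $title) {
        // Long titles wrap, costing an extra ToC line per wrap.
        $lines += max(1, (int) ceil(mb_strlen($title) / TOC_CHARS_PER_LINE));
    }
    return max(1, (int) ceil($lines / TOC_ENTRIES_PER_PAGE));
}
```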
And there you have it, 300+ page PDFs generated from the database, with all the title pages and such that they manually uploaded, in two languages, with a working table of contents. During my time, we never even added a cache to this monstrosity, it did all this every time the user clicked “Download PDF”. Took around 30 seconds, and the UI just pretended it was a really big file.
What a wild project, probably the biggest spaghetti mess I’ve ever written. But hey, actually met all the requirements, no matter how ridiculous, and I’m proud of that monstrosity. Probably still in use today.
Thank you! I’ve just been browsing with NSFW turned off, but: A) I’d actually rather turn on the blur function, if there weren’t literal porn throughout the “all” feed. B) A bunch of mild softcore stuff like “pretty women” and “celebs” gets through anyway.
Can’t believe it never occurred to me to use the block button to shape the all feed.
Legally responsible, for one.
I.e., if a federated instance hosted pedophilia, that content would be copied to, and served by, your instance’s infrastructure, which is obviously legally problematic.