

at minimum the brits have the source code. a couple of eu countries make parts for it as well
there’s a couple of big failures in the american defense industry (like shipbuilding) but the F35 is not one of them. the alleged killswitch is not likely a thing because, first, it could be exploited by the most probable adversary, who has already shown capability in EW; and second, it’s not necessary, since the jet depends on a constant stream of spare parts and maintenance anyway. as for whether it’s worth it, ask any remaining iranian radar operator for a firsthand opinion
some countries switched to euro-made jets anyway, but these aren’t likely to be doing the job that the F35 is cut out to do anyway (SEAD)
there are already euro alternatives in development, to which the americans were explicitly not invited (GCAP, FCAS). there’s also the everything-else part of the military; half of the euro countries now make artillery (both tube and rocket), so that can be bought locally too
it’s not listed because this is not what is happening
italy, for example, put bridge construction in that military budget (as critical infrastructure)
also the subtext was “spend that defense budget in america”, and that’s not happening either, for a variety of reasons, so it’s already a partial failure for them
Horseshit. USAF doesn’t need AIM-7 Sparrow or MIM-23 Hawk missiles; these aren’t even in service anymore. But Ukraine can use them (Hawk missiles can be used on Buk launchers)
maybe the check from openai didn’t clear
this sounds more like another org with ties to a couple of conservative oligarchs https://en.wikipedia.org/wiki/World_Congress_of_Families
if it’s a permanently installed high-power device, it only makes sense to wire it directly, like it’s done with ovens, and an EV charger draws even more power. why is there even a plug in the way? it’s not gonna be moved anyway. and while we’re at it, why don’t you people have three-phase circuits for big loads like this? a standard euro five-wire three-phase 32A circuit (or plug) gets you 22 kW; that plug tops out at about 9 kW
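rough back-of-envelope for those numbers, assuming a 400 V line-to-line euro three-phase circuit and a US 240 V split-phase circuit on a 50 A breaker derated to 80% for continuous load (my guess at what “that plug” is wired to):

```python
# rough sanity check of the kW figures above; the 400 V, 240 V and 50 A values
# are assumed typical circuits, not anything stated in the post
from math import sqrt

eu_three_phase = sqrt(3) * 400 * 32   # P = sqrt(3) * V_line_to_line * I for a euro 32 A circuit
us_split_phase = 240 * (50 * 0.8)     # 50 A breaker at the usual 80% continuous derating

print(f"euro 3-phase 32 A:         {eu_three_phase / 1000:.1f} kW")  # ~22.2 kW
print(f"us 240 V, 40 A continuous: {us_split_phase / 1000:.1f} kW")  # ~9.6 kW
```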
80000 hours are the same cultists from lesswrong/EA who believe the singularity is coming any time now, and they’re also the core of the people trying to build their imagined machine god at openai and anthropic
it’s all very much expected. verbose nonsense is their speciality, and they were doing it way before chatbots were a thing
bro tried to recruit jihadists in roblox (and failed) and now screenshots of it all are a matter of public record 💀
that was known almost a decade before that: https://en.wikipedia.org/wiki/Nth_Country_Experiment
it also helps if your air defense network doesn’t collapse immediately, because it turns out that in order to guard these nukes you also need a regular, capable conventional military
Yeah, who else. Nuking Dresden at that point would have been useless
you don’t have to choose a side, and you can wish everyone involved a very nice visit to the hague
either that, or nukes would have been used first in the korean war instead. imo it’s a good thing that nukes were first used against the most cartoonishly evil fascist state imaginable at that point
i think you’ve got it backwards. the very same people (and their money) who were deep into crypto moved on to the next buzzword, which turns out to be AI. this includes altman and zucc for starters, but there are more
it’s maybe because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
the gist:
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.
Here’s Eyal’s “Hook Model”:
First, the trigger is what gets you in. e.g., you see a chatbot prompt and it suggests you type in a question. Second is the action — e.g., you do ask the bot a question. Third is the reward — and it’s got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer — but sometimes you love the answer! Eyal says: “Feedback loops are all around us, but predictable ones don’t create desire.” Intermittent rewards are the key tool to create an addiction. Fourth is the investment — the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they’ve put in. Then the user loops back to the beginning. The user will be more likely to follow an external trigger — or they’ll come to your site themselves looking for the dopamine rush from that variable reward.
Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.
chatbot users are also attracted to their terminally sycophantic and agreeable responses; some users form parasocial relationships with motherfucking spicy autocomplete; and chatbots were marketed to management types as a kind of futuristic status symbol, where if you don’t use it you’ll fall behind and then you’ll all see. people end up with a mix of gambling addiction/fomo/parasocial relationship/being dupes of a multibillion dollar advertising scheme, and that’s why they get so unserious about their chatbot use
and also, separately, the core of openai and anthropic and probably some other companies is made up of cultists who want to build a machine god, but that’s an entirely different rabbit hole
like with any other bubble, the money for it won’t last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, they might wipe out all of these drivel-making machines for good
For a slightly earlier instance of it, there’s also real-time bidding
taking a couple of steps back and looking at the bigger picture, something you might have never done in your entire life judging by the tone of your post: people want to automate things that they don’t want to do. nobody wants to handcraft elaborate spam that will evade detection, but if you can automate it, somebody will use it that way. this is why spam, ads, certain kinds of propaganda and deepfakes are among the big actual use cases of genai that likely won’t go away (isn’t the future bright?)
this is tied to another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful for black propaganda would require a co-conspirator who has both the ability to do it and the correct political slant, who will shut up about it, and who has good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also includes things like nonconsensual porn, for which there are fewer barriers now thanks to genai
then, again, people automate things they don’t want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, when they have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don’t like or understand and likely have to pay people money for, all while the chatbot will never say sacrilegious things like “no” or “your idea is physically impossible” or “there is no reason for any of this”. people who don’t like coding vibecode. people who don’t like painting generate images. people who don’t like understanding things cram text through chatbots to summarize it. maybe you don’t see a problem with this, but that’s entirely a you problem
this leads to three further points. chatbots let you, for the low low price of selling your thoughts to saltman & co, offload all your “thinking” to them. this makes cheating in some cases exceedingly easy, something schools have to adjust to, while destroying any ability to learn for students who use them this way. another thing is that in production chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won’t repeat that mistake again; a chatbot will fall into the same trap the moment you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there’s no junior to promote in their place, because the junior was replaced by a chatbot.
this all comes before noticing little things like the multibillion dollar stock bubble tied to openai, or their mid-sized-euro-country-sized power demands, or whatever monstrosities palantir is cooking, and a couple of other things that i’m surely forgetting right now
and also
Is the backlash due to media narratives about AI replacing software engineers?
it’s you getting swept up in an outsized ad campaign for the most bloated startup in history, not “backlash in media”. what you see as “backlash” is everyone else who isn’t parroting the openai marketing brochure
While I don’t defend them,
are you suure
e: and also, lots of these chatbots are used as accountability sinks. sorry nothing good will ever happen to you because Computer Says No (pay no attention to the oligarch behind the curtain)
e2: also this is partially a side effect of silicon valley running out of ideas. after crypto crashed and burned, then the metaverse crashed and burned, all of these people (the same people who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can’t imagine anything else that will bring them that promised infinite growth. and them having money in the first place is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing because vcs somehow unlearned how to make money
congratulations on offloading your critical thinking skills to a chatbot that you most likely don’t own. what are you gonna do when the bubble is over, or when the datacenter it runs in burns down
It’s not. He-3 is supposed to maybe one day be used in fusion power, but we’re talking about tons of this stuff. Not only is the scale off, He-3 burning also requires a much higher temperature than D-T fusion, and all of this is supposedly just around the corner, coming in the next 20 years, pinky promise
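for reference, the two reactions and the ballpark temperatures usually quoted for them (standard textbook numbers, not tied to any particular reactor design):

```latex
% D-T vs D-He3, textbook values:
%   D-T burns reasonably at plasma temperatures around 10-15 keV,
%   D-He3 needs several times that before its reactivity becomes useful
\begin{align*}
  \mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}) \\
  \mathrm{D} + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He}\ (3.6\ \mathrm{MeV}) + p\ (14.7\ \mathrm{MeV})
\end{align*}
```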
People who think it’s a big deal also take Ray Kurzweil seriously; it’s scifi noise
In practical terms, when DHS wanted to get He-3 neutron sensors, they bought out the entire global supply for multiple years, for an application where only grams are needed and the He-3 isn’t used up. It’s currently made from the decay of tritium, and it’d be less energy intensive to keep making it the usual way