  • That’s generally not recommended as a way of stripping them, though, since the coating is often made of polyurethanes, which release isocyanates (highly toxic) when heated strongly. A small amount in a well-ventilated area might not be enough to give you any problems, but too much is very bad. The burnt organic residue will also make the joint harder to solder. Better to scrape the coating off first.


  • I think 3 years is probably about right. I don’t think their modus operandi is quite the classic Microsoft-style Embrace/Extend/Extinguish - probably just Embrace/Extinguish, since the Extend isn’t really necessary. The point is to leverage an open protocol to build a walled garden: embrace early on, so your early adopters have content from the rest of the community to interact with (overcoming the network effect of the fediverse having more content than you), and then extinguish once you have critical mass, pulling the ladder up and turning the network effects against the fediverse. We’ve seen this happen before with Facebook Chat and XMPP; there it took 5 years (embrace February 2010, extinguish April 2015). Network effects might be somewhat stronger for chat than for fediverse content, so discounting below 5 years is probably sensible (although it also depends on how well the fediverse does, and on how successfully they cross-promote it from Instagram and Facebook to get critical mass).



  • more is a legitimate program (it reads a file and writes it out one page at a time) - if it is the real more. It is a memory hog in the sense that (unlike the more advanced pager less) it reads the entire file into memory.

    I did an experiment to see if I could get the real more to show file descriptors similar to yours. I piped yes "" | head -n10000 >/tmp/test, then ran more < /tmp/test 2>/dev/null. Then I ran ls -l /proc/`pidof more`/fd.

    Results:

    lr-x------ 1 andrew andrew 64 Nov  5 14:56 0 -> /tmp/test
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 1 -> /dev/pts/2
    l-wx------ 1 andrew andrew 64 Nov  5 14:56 2 -> /dev/null
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 3 -> 'anon_inode:[signalfd]'
    

    I think this suggests your open files are probably consistent with the real more when stderr is redirected to /dev/null. Most likely, something you (or someone else logged in on a PTY) ran invoked more to display content that had been written to /tmp/RG3tBlTNF8. Next time, you could find the parent of the more process, or look up what else is attached to the same PTS with the fuser command, as sketched below.
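
    Something like this (the PTS number here is illustrative - substitute whatever your fd listing actually shows):

    # Find the parent of the running more process
    ps -o ppid= -p $(pidof more)
    # ...and the parent's command name
    ps -o comm= -p $(ps -o ppid= -p $(pidof more))
    # List every process attached to the same pseudo-terminal
    fuser -v /dev/pts/2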


  • I always thought of Raspberry Pi as a not-for-profit and supported it on that basis. If the model was supposed to be like Mozilla where they have a not-for-profit and a corporation that is wholly owned by the not-for-profit, then it seems like selling out the corporation to for-profit investors runs contrary to the goals of the not-for-profit. Does anyone know why they are allowing the corporation to be sold off?




  • Data being public (and privacy in general) shouldn’t be ‘all or none’. The problem is people joining the dots between individual bits of data to build a profile, not necessarily the individual bits of data.

    If you go out in public, someone might see you and recognise you, and that isn’t considered a privacy violation by most people. They might even take a photo or video which captures you in the background, and that, in isolation, isn’t considered a problem either (there is no expectation of privacy in a public place). But if someone sets out to do similar things at a mass scale (e.g. by scraping, or networked cameras, or whatever) and pieces together a profile of all the places you go in public, then that is a terrible privacy violation.

    Now you could similarly say that people who want privacy should never leave home, and otherwise people are careless and get what they deserve if someone tracks their every move in public spaces. But that is not a sustainable option for the majority of the world’s population.

    So ultimately, the problem is the gathering and collating of publicly available personally identifiable information (including photos) in ways people would not expect and don’t consent to, not the existence of such photos in the first place.



  • Phones have a unique equipment identifier (the IMEI) that they share with towers. Changing the SIM changes the subscriber ID (IMSI) but not the IMEI (manufacturers don’t make the IMEI easy to change). So even if thieves (or anyone else) swap the SIM, the phone can still be tracked by its IMEI for as long as it is left on.

    In practice, the bigger reason they don’t get caught every time, even with inadequate opsec, is that in places where phone thefts are common, solving them is probably not a high priority for local police. Discarding the SIM probably doesn’t make much difference to whether they get caught.


  • Here’s another source about wait times sometimes reaching 2 months, if you don’t believe me: https://www.xda-developers.com/xiaomi-2-month-wait-unlock-bootloader/. It has never personally been 2 months for me, but it has been over a week, and their support team refused to shorten it when I asked nicely, despite the fact that my daily-driver phone was broken and I couldn’t restore my LineageOS backup - I just had to wait. That’s why I don’t buy Xiaomi stuff any more.

    The wait time is determined by their servers, which send the bootloader a cryptographically signed certificate specific to the serial number of the device. The key used to sign the certificate stays on their servers; the client just calls the server, and either gets a response saying to wait this much longer, or gets the certificate. Xiaomi explicitly call it ‘apply for unlocking’ (see, e.g., the title of https://en.miui.com/unlock/index.html) - as in, they think it is their right to decide who gets to decide what runs on hardware I’ve bought from them, and we mere consumers must come begging and ‘apply’ to unlock.
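
    To illustrate why this can’t be bypassed offline, here is a minimal sketch of the kind of signature check such a bootloader performs - the file names, key type, and token format are assumptions for illustration; Xiaomi’s actual format isn’t public:

    # The vendor public key is baked into the bootloader; only the vendor's
    # server holds the matching private key, so only it can mint a valid token.
    echo -n "SERIALNUMBER12345" > serial.bin
    openssl dgst -sha256 -verify vendor_pub.pem \
        -signature unlock_token.sig serial.bin \
        && echo "accept unlock" || echo "stay locked"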

    “You don’t even have to use it”

    The bootloader is designed not to boot anything except MIUI without the certificate from the unlocking tool. While there are open-source clients (like https://github.com/francescotescari/XiaoMiToolV2), they still work by calling Xiaomi’s server to get the unlock certificate, so if you want to run anything except Xiaomi’s MIUI (which is a bad idea from a privacy perspective), you do, in practice, have to use it (at least their server). The only way around it would be if someone found a vulnerability in the bootloader or the processor itself that allows the ‘treacherous computing’ aspect of the boot to be bypassed without the certificate - and as far as I’m aware, there isn’t a reliable approach for that yet.


  • Wait times are as high as 2 months (depending on how old the phone model is, etc.), and even for regular Xiaomi customers, their support never seems to let anyone skip the wait - even if, for example, they broke their old phone and want to set up a new one like the old one (ask me how I know). During that period, MIUI acts like a data-collection honeypot, sucking up your PII and serving you ads.

    It might be ‘normal’ now for Xiaomi customers to wait to be able to unlock the phones that they have paid for and own (perhaps in the same sense that someone in an abusive relationship might consider getting hit ‘normal’ because it has been happening for a while), but the idea that the company who sold you the phone gets a say in when you get the ‘privilege’ of running what you like on it, and can make you jump through frustrating hoops to control your own device, is certainly not okay.

    If they just wanted to stop resellers shipping phones with non-Xiaomi-sanctioned malware/bloatware added, making the bootloader clearly indicate that it is unlocked (as Google does, for example) would be enough. Or they could create a separate brand for unlocked phones, using the same hardware except with a different logo, and let people choose between unlocked and walled garden.

    However, they make money selling targeted ads based on the information they collect - so they probably don’t want to do any of those things unless they have to, because doing so might disrupt their surveillance capitalism.


  • Xiaomi phones used to be good for custom ROMs, but now they try to stop you unlocking the bootloader by making you wait an unreasonable amount of time, after first registering the device with them, before you can unlock it. Many of the other vendors are even worse.

    So from that perspective, Pixel devices are not a terrible choice if you are going to flash a non-stock image.


  • People contributed to HashiCorp products - the software is not something solely made by HashiCorp. This might technically be legal under their CLA, and indeed even in the absence of the CLA under the Apache License, but it certainly isn’t fair to the people who contributed voluntarily in the expectation that it would form part of a Free software project.

    I think maybe the best way to combat this type of thing in the future is for F/L/OSS communities (i.e. everyone who contributes to a project without being paid) to start: 1) preferring copyleft projects over BSD/MIT-type licences, and 2) refusing to sign any kind of CLA (maybe with an exception for organisations legally obliged to remain non-profit). Then companies would have to choose between developing entirely at their own cost, or accepting contributions on the inbound=outbound model, meaning they are also bound by the copyleft licence and are forced to keep it as Free software. That would end the bait-and-switch of getting people to work on your product for free and then saying “surprise suckers, it’s no longer Free software!”.



  • There’s also the fact that GPL is ultimately about using copyright to reduce the harm that copyright can cause to people’s rights.

    If we look through the cases that could exist with AI law:

    1. Training can legally use copyrighted materials without a licence, but models cannot be copyrighted: This is probably a net win for software freedom - people could train models even on commercial software and generate F/L/OSS software quickly. It would undermine AGPL-style protection, though - companies could benefit from F/L/OSS and use means other than copyright to undermine users’ rights, and there would be nothing a licence could do to change that.
    2. Training can legally use copyrighted materials without a licence, models can be copyrighted: This would allow companies to benefit heavily from F/L/OSS, but not share back. However, it would also allow F/L/OSS to benefit from commercial software where the source is available.
    3. Training cannot legally use copyrighted materials without complying with the licence, and models cannot be copyrighted (or models can be copyrighted but outputs can’t): This is probably the worst case for F/L/OSS, because proprietary software couldn’t be used for training, yet proprietary software could use a model trained on F/L/OSS by someone else.
    4. Training cannot legally use copyrighted materials without complying with the licence, models can be copyrighted, and outputs can be copyrighted: In this case, GPLv2 and GPLv3 probably make the model and its outputs derivative works, so it is more or less the status quo.

  • I use Restic, called from cron, with a password file containing a long randomly generated key.

    I back up with Restic to a repository on a different local hard drive (not part of my main RAID array), with --exclude-caches as well as excluding lots of files that can easily be re-generated / re-installed / re-downloaded (so my backups are focused on important data). I make sure to include all important data, including /etc (and I also back up the output of dpkg --get-selections as part of my backup). I auto-prune my repository to apply a policy on how far back I keep (de-duplicated) Restic snapshots. The nightly job looks roughly like the sketch below.
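
    A minimal sketch - the repository path, password-file location, exclude file, and retention values here are illustrative, not my actual ones:

    export RESTIC_REPOSITORY=/mnt/backupdisk/restic-repo
    export RESTIC_PASSWORD_FILE=/root/.restic-password
    # Capture the package list so it is included in the backup
    dpkg --get-selections > /var/backups/dpkg-selections.txt
    restic backup / --exclude-caches --exclude-file=/root/restic-excludes.txt
    # Apply the retention policy and drop unreferenced data
    restic forget --prune --keep-daily 7 --keep-weekly 5 --keep-monthly 12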

    Once the backup completes, my script runs du -s on the backup and emails me if it is unexpectedly big (e.g. if I forgot to exclude some new massive file); otherwise, it uses rclone sync to sync the archive from the local disk to Backblaze B2. Roughly like this:
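
    Again a sketch - the size threshold, email address, and rclone remote name are made up for illustration:

    SIZE_KB=$(du -s /mnt/backupdisk/restic-repo | cut -f1)
    if [ "$SIZE_KB" -gt 50000000 ]; then   # ~50 GB sanity threshold
        echo "Backup repo unexpectedly large: ${SIZE_KB} KB" \
            | mail -s "backup size alert" me@example.com
    else
        rclone sync /mnt/backupdisk/restic-repo b2:my-bucket/restic-repo
    fi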

    I back up my password for B2 (in an encrypted password database) separately, along with the Restic decryption key. The restore procedure is: if the local hard drive is intact, restore with Restic from the last good snapshot in the local repository. If it is also destroyed, rclone sync the archive from Backblaze B2 back to local disk, and then restore from that with Restic, roughly as below.
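
    The disaster-recovery path, as a sketch (same illustrative names as above; restoring to an alternate target is safer than restoring straight over /):

    rclone sync b2:my-bucket/restic-repo /mnt/backupdisk/restic-repo
    restic -r /mnt/backupdisk/restic-repo restore latest --target /mnt/restore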

    For Postgres databases I do something different (they aren’t included in my Restic backups, except for config files): I back them up with pgbackrest to Backblaze B2, with archive_mode on and an archive_command that archives WALs to Backblaze. This allows me to do point-in-time recovery (back to any point within my pgbackrest retention policy). The wiring is sketched below.
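
    Roughly like this (the stanza name is illustrative; the B2 repository itself is configured in pgbackrest.conf via B2’s S3-compatible API):

    # postgresql.conf:
    #   archive_mode = on
    #   archive_command = 'pgbackrest --stanza=main archive-push %p'
    # Scheduled base backup, e.g. from cron:
    pgbackrest --stanza=main backup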

    For Docker containers, I create them with docker-compose and keep the docker-compose.yml so I can easily re-create them. I avoid keeping state in named volumes, and instead use bind mounts to a location on the host, and back up the contents for important state (or use PostgreSQL for state instead, where the service supports it). A minimal example of the pattern is below.
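
    A sketch of the pattern (image name and paths are made up for illustration):

    # Recreate the container from a kept docker-compose.yml; state lives on
    # the host via a bind mount, so the backup job above can pick it up.
    cat > docker-compose.yml <<'EOF'
    services:
      app:
        image: example/app:latest
        volumes:
          - ./data:/var/lib/app
    EOF
    docker-compose up -d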


  • He does indeed have a history of paying his way into looking like a visionary and/or an engineer. He bought into Tesla in early 2004; it was founded in mid-2003.

    His comfort zone was convincing people to give him money for one really ambitious thing, and then using that money to achieve some other thing (that no one would have given him money for) that is sort of on the way, but which has commercial value to him.

    For example, he has repeatedly said his companies will deliver full self-driving cars by dates that have since passed, and convinced investors to put him in a position to compete with companies like Toyota; he promised a ‘hyperloop’ and got funding to compete with other tunnelling companies; he promised to send people to Mars and got to compete with other satellite technology companies.

    So making big promises has paid off for him. For the investors, in terms of long-term value, they might have been better off investing in the existing companies he ended up competing with.

    But I suspect he is now outside his comfort zone, and might not even realise how far out of his depth he is.


  • The proposal doesn’t say what the interface between the browser and the OS / hardware is. They mention (but don’t elaborate on) modified browsers. Google’s track record includes:

    1. Creating SafetyNet and the Play Integrity API, which produce ‘attestations’ that a device is running manufacturer-supplied software. These checks can still be passed for now (at a lower ‘integrity level’) with software like LineageOS combined with Magisk and Universal SafetyNet Fix (Magisk by itself used to be enough, but then Google hired the Magisk developer, and soon after that stopped working). Those workarounds rely on making the device pretend to be an older device without ARM TrustZone configured, and one day the net is going to close - so these APIs actively take away users’ control over what OS they can run on their phone if they want to use Google and third-party services (Google Pay, many apps).
    2. Requiring Android apps to be signed, and creating a separate tier of ‘trusted’ Android apps needed to build a browser. For example, to implement WebAuthn with hardware support on Android (as Chrome does), you need to call com.google.android.gms.fido.fido2.Fido2PrivilegedApiClient, and Google doesn’t even provide a way to apply to get allowlisted for it. Mozilla and Google, for example, are allowed to build software that uses that API - but if you want to run your own modified browser and call that API on hardware you own? Good luck convincing Google to add you to the allowlist.
    3. Locking down extension APIs in Chrome to make it unsuitable for things they don’t like, such as adblocking, as in: https://www.xda-developers.com/google-chrome-manifest-v3-ad-blocker-extension-api/.

    So if Google can make it so you can’t run your own OS, and their OS won’t let you run your own browser (and BTW Microsoft and Apple are on a similar journey), and their browser won’t let you run an adblocker, where does that leave us?

    It creates a ratchet effect where Google, Apple, and Microsoft can compete with each other, and the Internet is usable from their browsers running unmodified systems sold by them or their favoured vendors, but every other option becomes impractical as a daily driver. They can effectively stack things against there ever being a new operating system / distro that competes with them, by making their web properties unusable from it and promoting that as the standard. This is a massive distortion of the open web from where it is now.

    One fix would be a regulation that if hardware has private or secret keys embedded in it, the manufacturer must provide the end user with those keys; and that if it has unchangeable public keys embedded, and requires software to be signed with the corresponding private key to boot or to access some hardware, the manufacturer must provide that private key to end users. If that were the law in a few jurisdictions big enough that manufacturers won’t just ignore them, it would shut down this sort of scheme.