• 1 Post
  • 164 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • Twitter operates servers in the EU. They will have at least a Frankfurt server, probably the UK, and probably elsewhere.
    It’s geographically closer, so it reduces latency and server load (faster to complete a request, faster to release allocated resources).
    It also gives redundancy. If the Frankfurt DC explodes, the system will fall back to the next closest DC (probably London).
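    A toy sketch of that nearest-DC-with-failover logic (the DC names and latency figures here are made up purely for illustration):

    ```python
    # Toy sketch of "route to the nearest DC, fall back to the next one".
    # DC names and latency numbers are invented for illustration only.
    DATACENTERS = [
        {"name": "Frankfurt", "latency_ms": 15, "up": True},
        {"name": "London",    "latency_ms": 25, "up": True},
        {"name": "US-East",   "latency_ms": 95, "up": True},
    ]

    def pick_datacenter(dcs):
        """Return the lowest-latency datacenter that is still up."""
        candidates = [dc for dc in dcs if dc["up"]]
        return min(candidates, key=lambda dc: dc["latency_ms"]) if candidates else None

    print(pick_datacenter(DATACENTERS)["name"])   # Frankfurt
    DATACENTERS[0]["up"] = False                  # the Frankfurt DC "explodes"
    print(pick_datacenter(DATACENTERS)["name"])   # falls back to London
    ```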

    So let’s say that the EU DC stops existing. And requests go over the ocean to the US.
    Twitter still has customers in the EU. They are still making money from EU citizens. Because twitter isn’t free. It costs money to manage, develop and run. Twitter tries to recoup those costs via adverts and subscription services.
    So let’s say that twitter is no longer allowed to extract money from the EU. The EU bans companies advertising on twitter.
    Any companies that have business in the EU (like selling to EU citizens) are no longer allowed to advertise on twitter.
    PayPal, Visa etc. are no longer allowed to take payments from EU citizens for Twitter services.
    Any EU service that has twitter integrations is no longer allowed to charge for twitter features.
    Basically, twitter has no way of getting money from the EU.

    Why would Twitter spend money to serve the EU population? It’s a cost sink. Dead weight.
    There is no revenue growth. Getting 50 million new EU users just means a massive cost increase.
    Plus paying for that extra load on (say) US-based servers and their international backbone links. (Just because you can reach a server on the other side of the world for “free” doesn’t mean commercial services can pump terabytes of data internationally for free).

    So yeh, the servers could stay located in the US where Twitter’s operations HQ is. Twitter could disband their international headquarters, so they no longer have companies in the EU.
    But they wouldn’t be able to get any money from EU citizens. And if they tried to circumvent the rules, they could be blocked at the DNS and BGP level, so the only way to access Twitter would be via a VPN.
    That didn’t work out well in Brazil, and Twitter caved in to the demands of the Brazilian government.




  • I think most people would just use media server software like Pixera, d3, TouchDesigner etc. to accomplish playback of video on a moving surface with feedback sensors.
    It’s established tech with plenty of integrations, and most companies able to deliver something like this aren’t Linux-first companies.
    If it was for an installation, something bespoke might be made using Linux. But the cost of TouchDesigner and a suitable computer is tiny compared to doing this on Linux and then supporting and documenting it (especially considering how widespread TouchDesigner/Pixera/d3 skills are in the industry vs more esoteric Linux skills).





  • If you are doing high-bandwidth GPU work, then the PCIe lanes of consumer CPUs are going to be the bottleneck, as they generally only provide 16 lanes.
    Then there are the Threadrippers, Xeons and all the server/professional-class CPUs that will do 40+ lanes of PCIe.

    A lane of PCIe 3.0 is about 1 GB/s (bytes, not bits).
    So, if you know your workload and bandwidth requirements, then you can work from that.
    If you don’t need the full 16 lanes per GPU, then a motherboard that supports bifurcation will allow you to run 4 GPUs with 4 lanes each from a CPU that has 16 lanes of PCIe. That’s 4 GB/s per GPU, or 32 Gb/s.
    If it’s just for transcoding, and you are running into the limitations of consumer GPUs (which I think are limited to 3 simultaneous streams), you could get a pro/server GPU like the Nvidia Quadros, which have a certain amount of resources but are unlimited in the number of streams they can process (so, one might be able to do 300 FPS of 1080p; if your content is 1080p 30 fps, that’s 10 streams). From that, you can work out bandwidth requirements and see if you need more than 4 lanes per GPU, as in the rough sketch below.
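    A back-of-the-envelope sketch of that arithmetic (the ~1 GB/s-per-lane figure and the 300 FPS encoder budget are the assumptions from above; the function names are just for illustration):

    ```python
    # Rough PCIe/transcode arithmetic: assumes ~1 GB/s per PCIe 3.0 lane and a
    # hypothetical encoder budget of 300 FPS at 1080p, as discussed above.

    PCIE3_GB_PER_SEC_PER_LANE = 1.0  # approximate usable bandwidth per lane

    def gpu_bandwidth_gb_s(lanes_per_gpu: int) -> float:
        """Host<->GPU bandwidth in GB/s for a given lane count."""
        return lanes_per_gpu * PCIE3_GB_PER_SEC_PER_LANE

    def max_streams(encoder_fps_budget: float, stream_fps: float) -> int:
        """How many simultaneous streams fit into a GPU's encode budget."""
        return int(encoder_fps_budget // stream_fps)

    # 16 CPU lanes bifurcated 4x4 -> four GPUs at 4 lanes each
    per_gpu = gpu_bandwidth_gb_s(4)
    print(per_gpu, "GB/s per GPU")          # 4.0 GB/s
    print(per_gpu * 8, "Gb/s per GPU")      # 32.0 Gb/s
    print(max_streams(300, 30), "streams")  # 10 streams of 1080p30
    ```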

    I’m not sure what’s required for AI. I feel like it is similar to crypto mining, massive compute but relatively small amounts of data.

    Ultimately, if you think your workload can consume more than 4 lanes per GPU, then you have to think about where that data is coming from. If it’s coming from disk, then you are going to need RAID 0 NVMe storage, which will take up additional PCIe lanes.









  • The metadata is actually quite important.
    Sure, chances are it’s just a “pending WhatsApp message” notification, not the actual contents of the message.
    However, with enough metadata, and by monitoring traffic to and from WhatsApp’s data centers, someone could see that User A accessed WhatsApp’s service, which then generated a WhatsApp notification for User B.
    That might just be a coincidence, but with enough data and time, the confidence that User A is talking to User B increases.
    If the data also shows that Users C, D and E get notifications at the same time, it is likely that all those users are in a group chat together.
    It’s called a timing attack.
    And while it perhaps isn’t enough evidence to stand up in court, it can help build a profile of the users and guide investigations to other possible accomplices.
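    A toy sketch of that correlation idea (the timestamps, user labels and the 1-second window are all made up; real analysis would use far more traffic and account for network jitter):

    ```python
    # Toy timing-correlation sketch over message metadata. All values invented.
    from collections import Counter

    # (timestamp_seconds, user): "user sent something to the service" events
    sends = [(100.0, "A"), (250.0, "A"), (400.0, "A"), (410.0, "F")]
    # (timestamp_seconds, user): "user received a push notification" events
    notifications = [(100.4, "B"), (100.5, "C"), (250.3, "B"), (400.6, "B"), (400.7, "C")]

    WINDOW = 1.0  # how closely a notification must follow a send to be counted

    pair_counts = Counter()
    for t_send, sender in sends:
        for t_notif, receiver in notifications:
            if 0 <= t_notif - t_send <= WINDOW:
                pair_counts[(sender, receiver)] += 1

    # Pairs that co-occur repeatedly are probably talking to each other; receivers
    # that repeatedly light up together are probably in the same group chat.
    for (sender, receiver), hits in pair_counts.most_common():
        print(f"{sender} -> {receiver}: {hits} correlated events")
    ```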