They/Them, agender-leaning scalie.

ADHD software developer with far too many hobbies/trades: AI, gamedev, webdev, programming language design, audio/video/data compression, software 3D, mass spectrometry, genomics.

Learning German (B2), Chinese (HSK 3-4ish), French (A2).

  • 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • The website does a bad job explaining what its current state actually is. Here’s the GitHub repo’s explanation:

    Memory Cache is a project that allows you to save a webpage while you’re browsing in Firefox as a PDF, and save it to a synchronized folder that can be used in conjunction with privateGPT to augment a local language model.

    So it’s just a way to get data from the browser into privateGPT, which is:

    PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an Internet connection. The project provides an API offering all the primitives required to build private, context-aware AI applications.

    So basically, it’s something you can ask questions like “how much butter is needed for that recipe I saw last week?” or “what are the big trends across the news sites I’ve looked at recently?”. But eventually it’ll automatically summarize and data-mine everything you look at to help you learn/explore.
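
    For illustration, a hypothetical sketch of what asking it a question from Python might look like (the endpoint URL, route, and response shape here are invented for the example - check privateGPT’s docs for its real API):

    ```python
    import requests

    # Assumed local endpoint - a placeholder, not privateGPT's documented route.
    API_URL = "http://localhost:8001/ask"

    def ask_documents(question: str) -> str:
        """Ask a question about the locally indexed documents."""
        response = requests.post(API_URL, json={"question": question})
        response.raise_for_status()
        return response.json()["answer"]  # assumed response shape

    print(ask_documents("How much butter is needed for that recipe I saw last week?"))
    ```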

    Neat.


  • I agree that older commercialized battery types aren’t so interesting, but my point was about all the battery types that haven’t had enough R&D yet to be commercially mass-produced.

    Power grids don’t care much about density - they can build batteries where land is cheap, and for fire control they need to artificially space out higher-density batteries anyway. There are heaps of known chemistries that might be cheaper per unit of energy stored (molten salt batteries, flow batteries, and solid-state batteries based on cheaper metals), but many only make sense for grid applications because they’re too big/heavy for anything portable.

    I’m saying it’s nuts that lithium-ion is being used in cases where energy density isn’t important. It’s a bit like using bottled water on a farm because you don’t want to pay to get the nearby river water tested. It’s great that sodium-ion could bring new economics to grid energy storage, but weird that the only reason it got developed in the first place was for a completely different industry.


  • Honestly, I don’t think there’s room for a competitor until a whole new paradigm is found. PyTorch’s community is the biggest and still growing. With their recent focus on compilation, not only are TF and JAX losing any chance at having an advantage, but the barrier to entry for new competitors is becoming much higher. Compilation takes a LOT of development time to implement, and it’s hard to ignore 50-200% performance boosts.
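
    For context, the compilation workflow is a one-line opt-in (a minimal sketch, assuming PyTorch 2.x where torch.compile is available; speedups vary by model and hardware):

    ```python
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    # Opt into graph capture and kernel fusion; the model's API is unchanged.
    compiled = torch.compile(model)

    x = torch.randn(32, 128)
    y = compiled(x)  # first call triggers compilation; later calls hit the fast path
    ```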

    Community size tends to ultimately drive open source software adoption. You can see the same with the web frameworks - in the end, most people didn’t learn React because it was the best available library, they learned it because the massive community had published so many tutorials and driven so many job adverts that it was a no-brainer to choose it over Angular, Vue, etc. Only the paradigm-shift libraries like Svelte and Htmx have had a chance at chipping away at React’s dominance.


  • The easiest way to get the basics is to search for articles, online courses, and YouTube videos about the specific modules you’re interested in. Papers are written for people who are already deep in the field. You’ll get there, but they’re not the most efficient way to get up to speed. I have no experience with textbooks.

    It helps to think of PyTorch as just a fancy math library. It has some well-documented frameworky structure (nn.Module) and a few differentiation engines, but all the deep learning-specific classes/functions (Conv2d, BatchNorm1d, ReLU, etc.) are just optimized math under the hood.
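
    For example, here’s a tiny sketch showing that nn.Linear followed by nn.ReLU is just a matrix multiply, an add, and a clamp:

    ```python
    import torch
    import torch.nn as nn

    x = torch.randn(8, 16)
    linear = nn.Linear(16, 4)

    # The module version...
    out_module = nn.ReLU()(linear(x))

    # ...and the same math written out by hand.
    out_manual = (x @ linear.weight.T + linear.bias).clamp(min=0)

    print(torch.allclose(out_module, out_manual))  # True
    ```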

    You can see the math by looking for projects that reimplement everything in numpy, e.g. picoGPT or ConvNet in NumPy.
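
    In that spirit, here’s a minimal sketch of a 2D convolution in plain NumPy (single channel, no padding or stride, purely to show the arithmetic):

    ```python
    import numpy as np

    def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        """Valid (no-padding) convolution: slide the kernel, sum the products.
        Strictly this is cross-correlation, which is what Conv2d computes too."""
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    edge_kernel = np.array([[1, 0, -1]] * 3)  # simple vertical-edge detector
    print(conv2d(np.random.rand(5, 5), edge_kernel).shape)  # (3, 3)
    ```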

    If you can’t get your head around the tensor operations, I suggest searching for “explainers”. Basically for every impactful module there will be a bunch of “(module) Explained” articles or videos out there, e.g. Grouped Convolution, What are Residual Connections. There are also ones for entire models, e.g. The Illustrated Transformer. Once you start googling specific modules’ explainers, you’ll find people who have made mountains of them - I suggest going through their guides and learning everything that seems relevant to what you’re working on.

    If an explanation isn’t clicking, just google around and find another one. People have done an incredible job of making this information freely accessible in many different formats. I basically learned my way from webdev into an AI career over a couple of years of casually watching YouTube videos.


  • Some minor/hard-to-notice health-related things can dramatically reduce alcohol tolerance and/or give “hangovers” shortly after starting a session.

    For me, inflammation is a big cause. I have (barely noticeable) cat allergies and (obvious but hard-to-avoid) food intolerances & gut issues. If I don’t stay on top of avoiding triggers, my alcohol tolerance goes from multiple G&Ts giving a nice buzz to 1-2 sips of a G&T causing dizziness and headaches.

    Electrolyte imbalance can also cause it. I’ve found I have to add magnesium and potassium salt to my diet, or else I generally feel more tired and my alcohol tolerance plummets. Once you start controlling these factors, you’ll get clear feedback from your body when you’ve had too much or too little salt: water and food taste different, and you feel generally tense or tired.

    My advice: try antihistamines, easily-digestible meals, and/or sports drinks for a few days before you drink. If those improve your tolerance, you probably have some underlying health stuff going on - figure it out and you’ll likely find a way to feel better in general.


  • Re: “Unity apologises.” (Technology@lemmy.world · 1 year ago)
    They’ve had days to prepare this response. They didn’t rescind or explain the one thing that people universally hated, which means they’re just stalling and trying to save their reputation without actually changing trajectory.

    We’ve seen this corporate bullshit so much in recent years. No more “benefit of the doubt”.


  • ooo, I love this. It reminds me of how nice C#'s LINQ is…

    “Pipeline style” DB queries have some interesting advantages as well (see the sketch after this list):

    • It’s straightforward to write efficient queries for DBs that don’t include a query optimizer (stares at Datomic)
    • You can split the pipeline into server-side and client-side steps when working with less capable DBs (stares at most of NoSQL)
    • It would be much easier to transition from a pipeline API to a non-text-based API so that our ORMs/query builders can directly talk to DBs without the overhead of generating and parsing SQL.
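
    To make the server/client split concrete, here’s a toy sketch in Python (the helper names are invented for illustration, not from any real query library). Each stage is an explicit step, so it’s obvious where to cut the pipeline between “compile to a server-side query” and “run locally over the streamed rows”:

    ```python
    from typing import Callable, Iterable, Iterator

    # Toy rows standing in for a DB result set.
    ROWS = [{"user": "a", "age": 31}, {"user": "b", "age": 17}, {"user": "c", "age": 45}]

    def where(rows: Iterable[dict], pred: Callable[[dict], bool]) -> Iterator[dict]:
        """A stage a capable DB could run server-side as a WHERE clause."""
        return (r for r in rows if pred(r))

    def select(rows: Iterable[dict], *fields: str) -> Iterator[dict]:
        """A stage that could run client-side when the DB can't project."""
        return ({f: r[f] for f in fields} for r in rows)

    adults = select(where(ROWS, lambda r: r["age"] >= 18), "user")
    print(list(adults))  # [{'user': 'a'}, {'user': 'c'}]
    ```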

  • I still use Google for ~95% of my queries because I like real sources, comprehensive documentation, and not having to read a wall of text when a one-line answer would have sufficed.

    ChatGPT is a good replacement for Quora/Stack Exchange for explaining general knowledge stuff like other languages’ grammar and simple science, as well as finding authors/books/movies from descriptions when you’ve forgotten their names.

    Bard is… kinda dumb. I gave it a few chances, but it was nothing compared to ChatGPT’s free tier.