Husband, father, kabab lover, history buff, chess fan and software engineer. Believes creating software must resemble art: intuitive creation and joyful discovery.

🌎 linktr.ee/bahmanm

Views are my own.

  • 29 Posts
  • 88 Comments
Joined 1 year ago
Cake day: June 26th, 2023




  • Good question!

    IMO a good way to help a FOSS maintainer is to actually use the software (esp pre-release) and report bugs instead of working around them. Besides improving the project’s quality, I’d find it very heart-warming to receive feedback from users; it means people out there are not only actually using the software but also care enough about it to take the time to report bugs and test patches.




  • I usually capture all my development-time “automation” in Make and Ansible files. I also use makefiles to provide a consistent set of commands for the CI/CD pipelines to work w/ in case different projects use different build tools. That way CI/CD only needs to know about make build, make test, make package, … instead of Gradle/Maven/… specific commands.

    Most of the time, the makefiles are quite simple and don’t need many comments. However, there are times when that’s not the case, hence the need for a line of comment on particular targets and variables.
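
    Just to illustrate the shape of it, here’s a trimmed-down sketch (assuming a Gradle project w/ the wrapper and the Java plugin; the task names are illustrative, not lifted from a real project):

    GRADLE ?= ./gradlew
    
    .PHONY : build test package
    
    # CI/CD invokes these three regardless of the underlying build tool.
    build :
    	$(GRADLE) assemble
    
    test :
    	$(GRADLE) test
    
    package :
    	$(GRADLE) jar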


  • “Can you provide what you mean by check the environment, and why you’d need to do that before anything else?”

    One recent example is a makefile (in a subproject) w/ a dozen targets to provision machines and run Ansible playbooks. Almost all the targets need at least a few variables to be set. Additionally, I needed any fresh invocation to clean the “build” directory before starting the work.

    At first, I tried capturing those variables w/ a bunch of ifeqs, shells and defines. However, I wasn’t satisfied w/ the results for a couple of reasons:

    1. Subjectively speaking, it didn’t turn out as nice and easy-to-read as I wanted it to.
    2. I had to replicate my (admittedly simple) clean target as a shell command at the top of the file (see the sketch below).
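
    To give a feel for it, that first attempt was roughly along these lines (variable names made up; not the actual file):

    VAR3 ?= foo
    
    ifeq ($(VAR1),)
      $(error VAR1 must not be blank)
    endif
    
    ifeq ($(VAR2),)
      $(error VAR2 must not be blank)
    endif
    
    # the clean target replicated as a plain shell command so that it
    # runs on every invocation
    $(shell rm -rf build)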

    Then I tried capturing that in a target using bmakelib.error-if-blank and bmakelib.default-if-blank as below.

    ##############
    
    # Fail fast if VAR1 or VAR2 is blank; give VAR3 the default value foo.
    .PHONY : ensure-variables
    
    ensure-variables : bmakelib.error-if-blank( VAR1 VAR2 )
    ensure-variables : bmakelib.default-if-blank( VAR3,foo )
    
    ##############
    
    .PHONY : ansible.run-playbook1
    
    ansible.run-playbook1 : ensure-variables cleanup-residue | $(ansible.venv)
    ansible.run-playbook1 :
    	...
    
    ##############
    
    .PHONY : ansible.run-playbook2
    
    ansible.run-playbook2 : ensure-variables cleanup-residue | $(ansible.venv)
    ansible.run-playbook2 :
    	...
    
    ##############
    

    But this was not DRY: every target had to repeat the same ensure-variables and cleanup-residue prerequisites.

    That’s why I thought there may be a better way of doing this which led me to the manual and then the method I describe in the post.


    “running specific targets or rules unconditionally can lead to trouble later as your Makefile grows up”

    That is true! My concern is that as the number of targets which don’t need that initialisation grows, I may have to rethink my approach.

    I’ll keep this thread posted on how this pans out as the makefile scales.


    “Even though I’ve been writing GNU Makefiles for decades, I still am learning new stuff constantly, so if someone has better, different ways, I’m certainly up for studying them.”

    Love the attitude! I’m in the same boat. I could have just kept doing what I already knew, but I thought a bit of manual reading would be well worth it.








  • I didn’t like the capitalised names, so I configured xdg to use all-lowercase names. That’s why ~/opt fits in pretty nicely.
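
    For anyone wanting the same, it’s a matter of editing ~/.config/user-dirs.dirs; something along these lines (paths matching the layout below):

    # ~/.config/user-dirs.dirs - lowercase variants of the defaults
    XDG_DESKTOP_DIR="$HOME/desktop"
    XDG_DOCUMENTS_DIR="$HOME/doc"
    XDG_DOWNLOAD_DIR="$HOME/downloads"
    XDG_MUSIC_DIR="$HOME/music"
    XDG_PICTURES_DIR="$HOME/pictures"
    XDG_PUBLICSHARE_DIR="$HOME/public"
    XDG_TEMPLATES_DIR="$HOME/templates"
    XDG_VIDEOS_DIR="$HOME/videos"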

    You’ve got a point re ~/.local/opt but I personally like the idea of having the important bits right in my home dir. Here’s my layout (which I’m quite used to now after all these years):

    $ ls ~
    bin  
    desktop  
    doc  
    downloads  
    mnt  
    music  
    opt 
    pictures  
    public  
    src  
    templates  
    tmp  
    videos  
    workspace
    

    where

    • bin is just a bunch of symlinks to frequently used apps from opt
    • src is where I keep clones of repos (but I don’t do work in src)
    • workspace is where I do my work, on git worktrees based off the clones in src (see the sketch below)
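
    In case it helps, the src → workspace flow is essentially this (repo and branch names are hypothetical):

    # the clone lives in ~/src; actual work happens in a worktree under ~/workspace
    $ cd ~/src/some-repo
    $ git worktree add -b my-feature ~/workspace/some-repo-my-feature
    $ cd ~/workspace/some-repo-my-feature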




  • RE Go: Others have already mentioned the right way, though I’d personally prefer ~/opt/go over what was suggested.
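
    E.g. in a bash init file (assuming the standard Go toolchain):

    export GOPATH="$HOME/opt/go"
    export PATH="$HOME/opt/go/bin${PATH:+:${PATH}}"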


    RE Perl: To instruct Perl to install to another directory, for example to ~/opt/perl5, put the following lines somewhere in your bash init files.

    export PERL5LIB="$HOME/opt/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"
    export PERL_LOCAL_LIB_ROOT="$HOME/opt/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}"
    export PERL_MB_OPT="--install_base \"$HOME/opt/perl5\""
    export PERL_MM_OPT="INSTALL_BASE=$HOME/opt/perl5"
    export PATH="$HOME/opt/perl5/bin${PATH:+:${PATH}}"
    

    Though you need to re-install the Perl packages you had previously installed.
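
    Once the init file is sourced, tools which honour those variables (cpanm, for instance) should land everything under ~/opt/perl5. A quick sanity check (the module name is just an example):

    $ source ~/.bashrc
    $ cpanm App::Ack
    $ ls ~/opt/perl5/bin   # ack should show up here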




  • I’ve got to admit that your points about the author’s presentation skills are all correct! Perhaps the reason I was able to relate to the material and ignore those flaws is that it’s a topic I’ve been actively struggling w/ for the past few years 😅

    That said, I’m still happy that this wasn’t a YouTube video or we’d be having this conversation in the comments section (if ever!) 😂


    To your point and @krnpnk@feddit.de’s RE embedded systems:

    That’s absolutely true: such a mindset is probably not going to work in an embedded environment. The author, w/o explicitly mentioning it anywhere, is clearly talking about distributed systems where you’ve got plenty of resources, stable network connectivity and a log/trace ingestion solution (like Sumo or Datadog) alongside your setup.

    That’s indeed an expensive setup, esp for embedded software.


    The narrow scope and the stylistic problems aside, I believe the author’s view is correct, if a bit radical.
    One of the major pain points of troubleshooting distributed systems is sifting through the logs produced by different services and teams, each w/ a different take on what the important bits of information in a log message are.

    It gets extremely hairy when you’ve got a non-linear lifeline for a request (ie branches of execution). And even worse when you need to keep your logs free of any information which could potentially identify a customer.

    The article and the conversation here got me thinking that maybe a combo of tracing and structured logging can help simplify investigations.
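
    E.g. a single log event carrying both the trace context and structured fields could look something like this (field names are illustrative, not from any particular solution):

    {"timestamp": "2024-05-01T10:15:04Z",
     "level": "ERROR",
     "trace_id": "4bf92f3577b34da6",
     "span_id": "00f067aa0ba902b7",
     "request_id": "req-42",
     "service": "payments",
     "message": "payment declined",
     "stacktrace": "..."}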


  • Thanks for sharing your insights.


    Thinking out loud here…

    In my experience with traditional logging and distributed systems, timestamps and request IDs do store the information required to partially reconstruct a timeline:

    • In the case of a linear (single-branch) timeline, you can always “query” by a request ID and order by the timestamps; that’s pretty much what tracing will do too (see the sketch after this list).
    • Things, however, get complicated when you’ve got a timeline w/ multiple branches.
      For example, consider a request whose lifeline forks into parallel executions which later join.
      Reconstructing the causality and join/fork relations between the execution nodes is almost impossible using traditional logs, whereas a tracing solution will turn this into a nice visual w/ all the spans and sub-spans.
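
    For the linear case, even a crude pipeline over structured (JSON) logs does the job (field names as in the earlier sketch):

    # collect all events of a single request across services, ordered by time
    $ cat service-*.log \
        | jq -rs 'map(select(.request_id == "req-42"))
                  | sort_by(.timestamp)[]
                  | "\(.timestamp) \(.service): \(.message)"'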

    That said, logs do shine when things go wrong: you often start your investigation w/ a stacktrace in the logs as a clue, and that (the stacktrace) is something I’m not sure a tracing solution will be able to provide.


    “they should complement each other”

    Yes! You nailed it 💯

    Logs are indispensable for troubleshooting (and potentially nothing else) while tracers are great for, well, tracing the data/request throughout the system and analysing the mutations.





  • I think I understand where RMS was coming from RE “recursive variables”. As I wrote in my blog:

    Recursive variables are quite powerful as they introduce a pinch of imperative programming into the otherwise totally declarative nature of a Makefile.

    They extend the capabilities of Make quite substantially. But like any other powerful tool, one needs to use them sparingly and responsibly, or risk ending up w/ a complex and hard-to-debug Makefile.
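
    A contrived example of that imperative pinch: each reference to a recursively-expanded (=) variable re-evaluates its right-hand side, whereas a simply-expanded (:=) variable is fixed at assignment time.

    # `=` defers evaluation: the shell command runs on every reference
    now-lazy   = $(shell date +%s%N)
    # `:=` evaluates once, right here
    now-eager := $(shell date +%s%N)
    
    .PHONY : demo
    demo :
    	@echo lazy:  $(now-lazy) / $(now-lazy)    # two different values
    	@echo eager: $(now-eager) / $(now-eager)  # same value twice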

    In my experience, most of the time I can avoid recursive variables and instead lay out the rules and prerequisites in a way that achieves the same. However, occasionally I have to resort to them, and I’m thankful that RMS didn’t win and they exist in GNU Make today 😅 IMO purist solutions have a tendency to turn out impractical.