
  • If it is the first thing, just put the db setup code you’re using in one file and call it “database.py”:

    database.py

    # the code you commonly use, ending with
    database = ...
    

    Then, from a second file in the same directory, main_program.py, write:

    from database import database
    # The first "database" here is the module name.
    # The second "database" is a variable you set inside that module.
    # You can also write this as follows:
    # import database
    # ... and use `database.database` to refer to the same thing
    # but that involves "stuttering" throughout your code.
    
    # use `database` as you would before - it refers to the "database" object that was found in the "database.py" module
    

    Then run it with `python main_program.py`.

    The main thing to realise here is that there are two names involved: one is the module, the other is the variable (or function) you set inside that module that you want access to. A concrete sketch follows.
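    For concreteness, here’s a minimal runnable version of the two files - a sketch assuming sqlite3 and a hypothetical app.db filename, with your real setup code swapped in as needed:

    database.py

    import sqlite3

    # the setup code you'd otherwise repeat in every script
    database = sqlite3.connect("app.db")  # hypothetical filename
    database.row_factory = sqlite3.Row    # rows behave like dicts

    main_program.py

    from database import database

    # `database` is the connection object created in database.py
    database.execute("CREATE TABLE IF NOT EXISTS runs (id INTEGER PRIMARY KEY)")
    database.commit()
    print(database.execute("SELECT count(*) FROM runs").fetchone()[0])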


  • There’s not much here to go on. Are you asking how to write a module that you can import?

    Are these the same set of DB files every time? Are the columns and other configurations the same? Are you writing new python code every month?

    Are you using some ETL process to spit out a bunch of files that you’d like to have imported and available easily? Are the formats the same but the filenames differ?

    I think it’s the first thing you’re after. There are a bunch of tutorials knocking around about this, e.g. https://www.digitalocean.com/community/tutorials/how-to-write-modules-in-python-3

    You might also be asking: if I write a module, how do I make it available to all my new python projects? You could just copy your whatever-my-module-is-called.py file into each new project (this might be simplest), but if you expect to keep updating it and want all of your projects to use the updated code, there are alternatives. One is to add the directory containing it to your PYTHONPATH. Another is to install it in editable mode in your python environment. (A sketch of both follows.)
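    Roughly, those two routes look like this (a shell sketch - the paths are hypothetical, and the editable install assumes the module’s directory contains a minimal pyproject.toml):

    # Option 1: add the module's directory to PYTHONPATH
    # (put this in your shell profile; path is hypothetical)
    export PYTHONPATH="$HOME/shared-python:$PYTHONPATH"

    # Option 2: editable install into the active environment,
    # so every project using that environment sees your updates
    pip install -e ~/shared-python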

    [I get the impression you’re a data person rather than a programmer - perhaps you have a colleague who’s more of the latter whom you can tap up for this? It doesn’t have to be difficult, but there’s typically a little bit of ceremony involved in setting up a shared module, however you choose to do it.]



  • Once the water companies were privatised, they took out massive loans and performed no maintenance. The loans were purely to pay shareholder dividends. Now they’re loaded down with debt.

    Atop this, that crumbling infrastructure can’t handle the increased flows caused by rising rainfall. So there’s been a general trend of dumping raw sewage into rivers (the fines are cheaper opex than the capex needed to fix the situation).

    It’s parasitic capitalism at its finest.




  • Casey’s video is interesting, but his example frames moving from 35 cycles/object to 24 cycles/object as a 1.5x speedup.

    Another way to look at it: that’s an 11-cycle saving per object.

    If you’re writing a shader or a physics sim this is a massive difference.

    If you’re building typical business software, it isn’t - but the 10,000-line monster method does crop up there, and it’s a maintenance disaster.

    I think extracting “clean code principles lead to a 50% cost increase” as the message needs a degree of context; some rough numbers below illustrate why.
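    As a rough back-of-envelope (assuming a 3 GHz clock; the cycle counts are the ones quoted above):

    # the same 35 -> 24 cycles/object change, in relative and absolute terms
    cycles_before, cycles_after = 35, 24
    clock_hz = 3e9  # assumed 3 GHz core

    print(cycles_before / cycles_after)             # ~1.46x relative speedup
    saved_s = (cycles_before - cycles_after) / clock_hz
    print(saved_s * 1e9)                            # ~3.7 ns saved per object

    # at shader/physics scale this compounds:
    print(saved_s * 1_000_000 * 1e3)                # ~3.7 ms saved per million objects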





  • The test case purported to be bad data - exactly the kind of input you’d presumably want to test your dearchiver’s behaviour against.

    Nothing this attack did appears to involve memory safety; it uses features like ifunc to hook behaviour.

    The notion of reproducible CI is interesting, but there’s nothing preventing this setup from repeatedly producing the same output in (say) a Debian package build environment.

    There are many signatures here that look “obvious” with hindsight, but ultimately this comes down to establishing trust. Technical sophistication aside, this was a very successful attack against that trust foundation.

    It’s definitely the case that the stack of C tooling for builds (CMakeLists.txt, autotools) makes obfuscating content easier. You might point at modern build tooling like cargo as an alternative - however, build.rs and proc macros are not typically sandboxed at present. I think it’d be possible to replicate the effects of this attack using that tooling.






  • Came here to say the same thing. The git book is an afternoon’s reading. It’s well worth the time - even if you think you know git.

    People complain about the UX of the cli tool (perhaps rightly), but it’s honestly little different from the rest of the unix cli experience: ad hoc, arbitrary, inconsistent.

    What’s important is a solid mental model and the vocabulary of primitive and compound operations built on it. How you spell those in the cli is just a thing you learn as you go.