First, some context.

I’ve written a program that starts running when I log on and produces a data point every two seconds. This daemon keeps all the collected data in memory until it is terminated (usually when I shut down the system), at which point it dumps the collected data to a file on disk.

(These are questionable design decisions, I know, but they’re not the point of this post. Though feel free to comment on them anyway.)

I’ve written another program that reads the data file and graphs it. To get the most current data, I can send the USR1 signal to the daemon, which causes it to dump its data immediately; after restarting the renderer, I can analyze the fresh data (this flow is sketched below, after the tech list).

The tech (pregnant women and the faint of heart, be warned)

  • The daemon is written in TypeScript and executed through an on-the-fly transpiler in Node.
  • The data file is just a boring JSON dump.
  • systemd is in charge of starting and stopping the daemon.
  • The renderer is a static web page, served by a python3 server, that uses compiled TypeScript to draw pretty lines on the screen with a charting library.
  • Everything runs on Linux (Mint, to be specific).
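
For illustration, the dump-on-USR1 part of the daemon looks roughly like this (a simplified sketch, not my actual code; the file path and data shape are made up):

    // Collect a sample every two seconds; flush everything to disk on demand.
    import { writeFileSync } from "node:fs";

    const samples: Array<{ t: number; value: number }> = [];

    setInterval(() => {
      // Math.random() stands in for the real measurement
      samples.push({ t: Date.now(), value: Math.random() });
    }, 2000);

    function dump(): void {
      writeFileSync("/tmp/daemon-data.json", JSON.stringify(samples));
    }

    process.on("SIGUSR1", dump);    // the manual "flush now" trigger
    process.on("SIGTERM", () => {   // systemd's default stop signal
      dump();
      process.exit(0);
    });

Since systemd manages the unit, the signal can be sent with systemctl kill -s SIGUSR1 <unit> rather than hunting down the PID.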

As I’m looking for general ideas for my problem, you are free to ignore the specifics of that tech stack and pretend everything was written in Rust.

Now to the question.

I would like to get rid of manually sending the signal and refreshing the web page, and I would like your opinions on how to go about this. The aim is to start the web server serving the drawing code and have each data point appear as it is generated by the daemon.
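
To give a sense of the target, the renderer would then consume a stream instead of the dump file, along these lines (hypothetical endpoint and chart API):

    // Browser-side sketch: append live points as they arrive.
    declare const chart: { addPoint(p: { t: number; value: number }): void }; // stand-in for the charting library

    const ws = new WebSocket("ws://localhost:8080");
    ws.onmessage = (ev: MessageEvent<string>) => {
      chart.addPoint(JSON.parse(ev.data));
    };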

My own idea (feel free to ignore)

My first intuition was to have the daemon send its data through a Unix pipe. Using a web server, I could then forward these messages through a WebSocket to the renderer frontend. However, it’s not guaranteed that the renderer will ever start, so a lot of messages could queue up in that pipe – if that is even possible; I haven’t researched that yet.
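
A sketch of that bridge (using the ws npm package; the FIFO path and newline framing are invented, and the FIFO would be created beforehand with mkfifo):

    // Read from the FIFO and fan each chunk out to all connected renderers.
    import { createReadStream } from "node:fs";
    import WebSocket, { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    // Opening the read end of a FIFO blocks until a writer appears,
    // so nothing flows until the daemon starts writing.
    const fifo = createReadStream("/tmp/daemon.fifo", { encoding: "utf8" });

    fifo.on("data", (chunk) => {
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) client.send(chunk);
      }
    });

For what it’s worth, a Linux pipe only buffers about 64 KiB by default; once it’s full, the daemon’s writes block, which is exactly the queueing problem above.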

I’d need a way for the web server to inform the daemon to start writing its data to a socket, and also a way to stop these messages. How do I do that?
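
One shape that occurred to me (purely illustrative; the socket path is invented): make the connection itself the subscription, so the daemon streams only while someone is listening and no explicit start/stop messages are needed.

    // Daemon side: stream data points only while a client is connected.
    import { createServer, Socket } from "node:net";

    const clients = new Set<Socket>();

    const server = createServer((conn) => {
      clients.add(conn);                              // web server connected: start
      conn.on("close", () => clients.delete(conn));   // disconnected: stop
      conn.on("error", () => clients.delete(conn));   // e.g. EPIPE on abrupt exit
    });
    server.listen("/tmp/daemon-control.sock");

    // Called from the existing two-second sampling loop.
    function publish(point: { t: number; value: number }): void {
      const line = JSON.stringify(point) + "\n";
      for (const c of clients) c.write(line);         // no clients, no writes, nothing queues
    }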

I could include the web server that serves the renderer in the daemon process. That would eliminate the need for IPC. However, I’m not sure whether that mixes concerns too much. I would like the code that produces the data to be as small as possible, so I can be reasonably confident that it’s capable of running in the background for an extended period of time without crashing.

Another way would be to use signals like I did for the dumping of data. The web server could send, for instance, USR2 to make the daemon write its data to a pipe. But this scenario doesn’t scale well – what if I want to deliver different kinds of messages to the daemon? There are only so many signals.
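
For completeness, that variant would look something like this (SIGUSR2 rather than USR1 because Node reserves SIGUSR1 for its inspector):

    // Toggle streaming on SIGUSR2. It works, but a signal carries no payload,
    // so every new kind of message would need a signal of its own.
    let streaming = false;

    process.on("SIGUSR2", () => {
      streaming = !streaming;
    });

    // in the sampling loop: if (streaming) writeToPipe(point);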

  • dragonfly4933@lemmy.dbzer0.com · 2 days ago

    Of the things people complain about systemd bringing in, this is among the least offensive. It makes sense for an init system to provide this kind of functionality: spawning new system processes on demand.

    That said, on modern systems it rarely makes sense to use such features. Spawning a new process per request or on demand doesn’t gain you much and does cost performance.

    Spawning a new process is pretty slow on most OSes compared to other operations. There is also added latency while the new process loads, whereas most software these days can handle a new request in more efficient ways.

    I think you can also try to reuse the same process for multiple requests, stopping it only once it has been quiet for a while (rough sketch below). But this still doesn’t really help much.
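
    A rough sketch of what I mean, assuming systemd socket activation with Accept=no (which hands the listening socket to the service as fd 3; the timeout and handling here are invented):

        // Serve requests from the inherited socket; exit after a quiet minute.
        import { createServer } from "node:net";

        const IDLE_MS = 60_000;
        let idleTimer: NodeJS.Timeout | undefined;

        function armIdleTimer(): void {
          clearTimeout(idleTimer);
          // systemd keeps the socket open and respawns us on the next connection
          idleTimer = setTimeout(() => process.exit(0), IDLE_MS);
        }

        const server = createServer((conn) => {
          clearTimeout(idleTimer);                // busy again
          conn.on("close", armIdleTimer);
          conn.end("hello\n");                    // placeholder request handling
        });

        server.listen({ fd: 3 }, armIdleTimer);   // fd 3 = socket passed by systemd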

    Historically, I think it was used to try to save memory. But today it’s a bigger nuisance than it’s worth. I just checked how much memory sshd is using, and I think it is less than 10 MB.

    total kB 8508 6432 1160

    And to be clear, you theoretically can’t save much memory, if any, by doing this, because you must have enough memory available to run the process anyway; otherwise bad things happen or some other process gets OOM-killed.

    Additionally, spawning a new process per request can be an availability risk: an attacker could open a series of very slow connections to a server that spawns a process per request, depleting its resources.

    With all that said, I wouldn’t say there are no uses at all for this; it can be useful for very minimal network-connected software that does some very basic stuff inside a secure network.