• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: November 18th, 2023

  • The problem with making a custom web server is that you take responsibility for re-solving all the non-obvious security vulnerabilities. I always try to delegate as much network-facing code as possible to a mature implementation someone else wrote for that reason.

    Here’s how I’d implement it, based on stuff I’ve done before:

    1. Start with either Actix Web or Axum for the server itself.
    2. Use std::thread to bring up mpv in a separate thread.
    3. Use an async-capable channel implementation like flume as a bridge between the async and sync worlds.
    4. If the async side needs to wait on the sync side, include the sending half of a tokio::sync::oneshot in the “job order” object your async code drops into the channel, then have the async task await the receiving half. That way, the async task can wait for a completion signal from the sync thread without blocking the thread(s) underlying the task executor.
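
    To make step 4 concrete without pulling in the actual crates, here’s a std-only analogue of the pattern (Job, input, and demo are made-up names for illustration; in the real thing, substitute flume for the jobs channel and tokio::sync::oneshot for the per-job reply channel so the async side can .await it):

```rust
use std::sync::mpsc;
use std::thread;

// A "job order": the request payload plus a single-use reply channel.
struct Job {
    input: String,
    reply: mpsc::Sender<String>,
}

fn demo() -> String {
    let (jobs_tx, jobs_rx) = mpsc::channel::<Job>();

    // The sync worker thread (standing in for the thread that owns mpv).
    let worker = thread::spawn(move || {
        for job in jobs_rx {
            // Do the blocking work, then signal completion on the
            // per-job reply channel.
            let _ = job.reply.send(format!("done: {}", job.input));
        }
    });

    // The requesting side: drop a job order into the channel, then wait
    // on that job's own reply channel for the completion signal.
    let (reply_tx, reply_rx) = mpsc::channel();
    jobs_tx
        .send(Job { input: "play".into(), reply: reply_tx })
        .unwrap();
    let answer = reply_rx.recv().unwrap();

    drop(jobs_tx); // closing the jobs channel ends the worker's loop
    worker.join().unwrap();
    answer
}

fn main() {
    assert_eq!(demo(), "done: play");
}
```

    Because each job carries its own reply channel, the worker never needs to expose shared state, and two concurrent requesters can’t receive each other’s results.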

  • I’m sure other people have a more teachable way of learning these things, but I’m just one of those nerdy guys who’s been reading technical materials for pleasure since elementary school and picked up the core “this will tell you how the system is designed so you know what to ask about” knowledge along the way.

    For example, I just ran across The TTY Demystified, Things Every Hacker Once Knew, The Art of UNIX Programming, and A Digital Media Primer for Geeks on my own. (Sort of the more general version of “It showed up in the YouTube sidebar one day” or “I landed on it while wandering Wikipedia for fun”.)

    Beyond that, it’s mostly “exposing yourself to things the professionals experience”:

    • running a Linux distro like Arch Linux or Gentoo, which expect you to tinker under the hood and give you the documentation to do so
    • maybe working through Linux From Scratch to see how the pieces of a Linux system fit together
    • reading periodicals like LWN (articles come out from behind the paywall after a week, if you’re tight on money or need time to convince yourself it’s worthwhile)
    • watching conference talks on YouTube, like code::dive conference 2014 - Scott Meyers: Cpu Caches and Why You Care or “NTFS really isn’t that bad” - Robert Collins (LCA 2020)

    (I switched to Linux permanently while I was still in high school, several years before YouTube even existed, and I’m only now getting back into Windows because I’m buying used books to start learning hobby-coding for MS-DOS (beyond QBasic 1.1), Windows 3.1, Windows 95 (beyond Visual Basic 6), and classic Mac OS, so I haven’t really picked up much deep knowledge of Windows.)

    The best I can suggest for directed learning is to read up on how the relevant system (eg. the terminal, UNIX I/O) works until you start to get a sense for which are the right questions to ask.


  • What you’re running into is that read() does blocking I/O by default. You can change that, but both approaches (checking for pending data before reading, or putting stdin into non-blocking mode so it returns immediately either way) require different code on Windows and on POSIX, so it’s best to let your platform abstraction (i.e. termwiz) handle it.

    I have no experience with Bevy or Termwiz, but see if this does what you want once you’ve fixed any “I wrote this but never tried to compile it” bugs:

    use std::time::Duration;
    use termwiz::terminal::{SystemTerminal, Terminal};
    
    // poll_input() takes &mut self and is defined on the Terminal trait,
    // so the trait needs to be in scope and the terminal passed mutably.
    fn flush_stdin(main_terminal: &mut SystemTerminal) {
        // A zero timeout means "return immediately once nothing is pending".
        while let Ok(Some(_)) = main_terminal.poll_input(Some(Duration::ZERO)) {}
    }
    

    If I’ve understood the termwiz docs correctly, that’ll pull and discard keypresses off the input buffer until none are left and then return.

    Note that I’m not sure how it’ll behave if you call enter_name a second time and you’re still in cooked mode from a previous enter_name. My experience is with raw mode. (And it’s generally bad form to have a function change global state as a hidden side-effect and not restore what it was before.)


  • The answer depends on what’s actually going on. I’ve yet to do this sort of thing in Rust but, when I was doing it in Python and initializing curses for a TUI (i.e. like a GUI but made using text), I remember the curses wrapper’s initialization stuff handling that for me.

    Because of the way terminal emulation works, there are actually two buffers: one inside the emulated terminal “hardware”, which lets the user accumulate and backspace characters before the data goes over the emulated wire when Enter is pressed, and one on the receiving end of the wire.

    Paste your initialization code for your curses wrapper and I’ll take a look at it after I’ve had breakfast.


  • For an interactive terminal program with the characteristics you want, you need to do two things:

    1. Flush stdin before using it, similar to what things like sudo do. (Basically, once your program is all started up and ready to go, read everything that’s already in there and throw it away.)
    2. Disable the terminal’s default echoing behaviour. It traces back to when the terminal was a completely separate device, possibly half-way around the world from the machine you were logged into, on the other side of a slow modem, and you didn’t want the latency of waiting for the far end to echo each keystroke. (See this if you want more context.)

    Windows and POSIX (Linux, the BSDs, macOS, and basically everything else of note) have different APIs for this. On the POSIX side, you want something that wraps the curses library, which can put your terminal into “raw mode” or some other configuration that is unbuffered and doesn’t do terminal-side echo. On the Windows side, it can be done either by wrapping the Windows APIs directly or by using the pdcurses library.

    Something like termwiz should do for both… though you’ll probably need to reimplement print_typewriter, which should be trivial from what I see of its README.


  • > Wouldn’t you want your SSG to include a dev-server anyways? Zola has zola serve which even does incremental rebuilds, but something less sophisticated should be easy to add to your own (only took me a weekend to add to hinoki including rebuilds, though mostly starting the build from scratch on changes).

    I don’t want the overhead of looping through an HTTP client and server implementation where it isn’t needed. I design my tooling against a test target roughly comparable to a Raspberry Pi 4, performance-wise.
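
    That said, for a sense of scale, a “less sophisticated” dev server really is tiny. Here’s a std-only sketch (serve is a made-up name; no rebuilds, no MIME types, and, echoing the warning in my web-server comment above, no protection against ../ path traversal, so it’s strictly localhost scaffolding):

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::path::PathBuf;
use std::thread;

// Toy dev server: serve files out of `root` with no caching, no MIME
// types, and no rebuild-on-change. It does NOT sanitize "../" in request
// paths, which is exactly the kind of non-obvious vulnerability that
// delegating to a mature implementation avoids.
fn serve(root: PathBuf, listener: TcpListener) {
    for stream in listener.incoming().flatten() {
        let root = root.clone();
        thread::spawn(move || {
            let mut stream = stream;
            let mut buf = [0u8; 4096];
            let n = stream.read(&mut buf).unwrap_or(0);
            let request = String::from_utf8_lossy(&buf[..n]).into_owned();
            // "GET /some/path HTTP/1.1" -> "/some/path"
            let path = request.split_whitespace().nth(1).unwrap_or("/");
            let rel = path.trim_start_matches('/');
            let file = if rel.is_empty() {
                root.join("index.html")
            } else {
                root.join(rel)
            };
            match std::fs::read(&file) {
                Ok(body) => {
                    let _ = write!(
                        stream,
                        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n",
                        body.len()
                    );
                    let _ = stream.write_all(&body);
                }
                Err(_) => {
                    let _ = write!(
                        stream,
                        "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
                    );
                }
            }
        });
    }
}
```

    It still costs you a TCP round-trip per asset, which is the overhead I’d rather skip when a file:// check will do.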


  • > Have you investigated some of the options already now?

    A bunch of other things came up, forcing me to put the project on the back burner.

    (eg. Most recently, about a week ago, my 6-month-old boot drive went bad. It took me several days to rush-order a new NVMe drive, learn ZFSBootMenu, restore my backups, and redesign my backup strategy so that, when the original comes back from RMA, if ZFS mirroring and snapshotting plus the trick for mirroring the EFI system partition aren’t enough to ensure high availability, a full, bootable backup of the NVMe pool’s contents can be restored in two hours or less, with the sequential read performance of my first backup tier as the bottleneck.)

    > missing flexibility for output paths has been an annoyance.

    Hmm. We’ll see if I wind up using it. Avoiding dead links has been non-negotiable, to the point where replicating my WordPress blog on a local httpd, spidering it, and logging the URLs I need to preserve has been one of the big hold-ups.

    > is that I found Zola to be quite hard to hack on

    Hmm. Potentially a reason I’ll wind up making my own, given that I’ve written SSGs in Python before (eg. https://vffa.ficfan.org/ is on a homebrew Python SSG) and I’ve already got a single-page pulldown-cmark frontend I’ve gone way overboard on the features for and a basic task-specific Rust SSG for my mother’s art website that I can merge with it and generalize.

    EDIT: Here’s a screenshot of what I mean by saying I’ve gone way overboard.

    > and Tera (its templating lang) to be a little buggy / much less elegant than minijinja API-wise.

    Hmm. Noted. I think I’m using Tera for my mother’s SSG.

    > Re. link checking, have you seen lychee? When I found out about it, the priority of building my own link checker in my SSG (that was only an idea at that point, I think) basically dropped to zero :D

    You accidentally re-used the link to the Zola issue tracker there. I have not yet checked out lychee and I’m getting a docs.rs error when clicking the examples link, so all I can say is that it’ll depend on how amenable it is to checking a site rooted in a file:// URL so I don’t need the overhead and complexity of spinning up an HTTP server to check for broken links.
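
    For checking a tree of generated files directly, the core of what I have in mind is small. Here’s a rough std-only sketch (broken_links and the naive href scan are made up for illustration; a real checker would parse the HTML properly and handle mailto:, srcset, and friends):

```rust
use std::fs;
use std::path::Path;

// Walk a directory of generated HTML, pull relative href="..." targets
// out with a naive string scan, and report targets that don't exist on
// disk. Absolute URLs and pure-fragment links are skipped.
fn broken_links(root: &Path) -> Vec<(std::path::PathBuf, String)> {
    let mut broken = Vec::new();
    let mut dirs = vec![root.to_path_buf()];
    while let Some(dir) = dirs.pop() {
        for entry in fs::read_dir(&dir).into_iter().flatten().flatten() {
            let path = entry.path();
            if path.is_dir() {
                dirs.push(path);
            } else if path.extension().is_some_and(|e| e == "html") {
                let html = fs::read_to_string(&path).unwrap_or_default();
                let mut rest = html.as_str();
                while let Some(i) = rest.find("href=\"") {
                    rest = &rest[i + 6..];
                    let Some(end) = rest.find('"') else { break };
                    let target = &rest[..end];
                    rest = &rest[end..];
                    if !target.contains("://") && !target.starts_with('#') {
                        // Resolve relative to the page, dropping any fragment.
                        let file_part = target.split('#').next().unwrap_or(target);
                        if !path.parent().unwrap_or(root).join(file_part).exists() {
                            broken.push((path.clone(), target.to_string()));
                        }
                    }
                }
            }
        }
    }
    broken
}
```

    Since it only touches the filesystem, it works on a site rooted at a file:// URL with no HTTP server involved.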





  • > It still returns relative paths if the input was relative

    False

    > and it doesn’t resolve “…”

    I’ll assume you meant .., since ... is an ordinary filename. (Aside from the “who remembers ...?” feature introduced in Windows 95’s COMMAND.COM, where cd ... was shorthand for doing cd .. twice and you could omit the space after cd if your target was all dots.)

    The reason it doesn’t do that is that, when symlinks get involved, /foo/bar/.. does not necessarily resolve to /foo and making that assumption could introduce a lurking security vulnerability in programs which use it.
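
    To see the symlink problem concretely, here’s a quick std-only demonstration (Unix-only because of the symlink call; the directory names are made up):

```rust
use std::fs;

// Set up, under the system temp dir:
//   dotdot-demo/real/sub/          a real directory
//   dotdot-demo/link -> real/sub   a symlink
// Lexically stripping ".." says "link/.." is the demo root; actually
// resolving it (fs::canonicalize) lands in "real" instead.
fn demo() -> std::io::Result<()> {
    #[cfg(unix)]
    {
        let base = std::env::temp_dir().join("dotdot-demo");
        let _ = fs::remove_dir_all(&base);
        fs::create_dir_all(base.join("real/sub"))?;
        std::os::unix::fs::symlink(base.join("real/sub"), base.join("link"))?;

        let resolved = fs::canonicalize(base.join("link").join(".."))?;
        // "link/.." resolves through the symlink to real/, not back to base/.
        assert_eq!(resolved, fs::canonicalize(base.join("real"))?);
        assert_ne!(resolved, fs::canonicalize(&base)?);
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    demo()
}
```

    A path API that silently assumed the lexical answer would hand that mismatch straight to whatever security check trusted it.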


  • Ahh, yeah. In the beginning, Rust was built around the idea that individual files and invoking rustc are internal details, only relevant for integration into some other build system like Bazel, and that “normal” Rust projects need to be inside a Cargo project structure.

    There is in-development work to have official support for something along the lines of rust-script, but it’s still just that… in development. If you want to keep an eye on it, here is the tracking issue.


  • That’s not how it’s supposed to be.

    > but for example Vec::new() doesn’t highlight anything at all.

    If I do Vec::new(foo, bar), I get expected 0 arguments, found 2 (rust-analyzer E0107).

    > but things like passing too many arguments into the println macro doesn’t throw an error.

    I don’t get that either, but I’m still running the Vim configuration I set up on my previous PC from 2011, where I turned off the checks that require running cargo check or cargo clippy in the background. From what I remember, a properly functioning default rust-analyzer config should pick up and display anything cargo check would catch, and you can switch it to cargo clippy for even stricter results.
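
    For reference, switching that over is a one-line setting. This is the VS Code settings.json spelling; other editors expose the same rust-analyzer option through their own LSP configuration:

```json
{
    "rust-analyzer.check.command": "clippy"
}
```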

    > Or how shown in this example, it did not autocomplete the clone method, while also just accepting .clo; at the end of a String (it also didn’t autocomplete “String”).

    I get clone(), clone_into(), and clone_from() as proposed as-you-type completions for .clo on foo (where let foo = String::new();), and when I typed Stri it proposed a whole bunch of things with String at the top (eg. the stringify! macro and OsString, mixed in with results from other crates in the project, like serde).







  • I’ll try to fit in sampling it at some point in the near future as a candidate for building on.

    I just decided to finally double down and do the work to switch away from WordPress to GitHub Pages and:

    • Jekyll is still hell to get running locally for testing without erroring out during the install
    • Pelican seems like it’d be more trouble than it’s worth to get what I want
    • I insist that no links be broken in the switchover. (Doing this to my standards was a big part of what I wound up procrastinating on, since I basically need to install WordPress locally, then write something which spiders the entire site and verifies that each path is also present in the new site and that the page contents are identical after being run through a filter that cuts away the site template and normalizes any irrelevant rendering differences.)
    • I already have a pulldown-cmark-based CLI that I wrote a couple of years ago to render single documents and it’d be nice to retrofit it (or at least its features) onto something Rust-based for my blog. (Hell, just a couple of days ago, after implementing support for shortcodes, I got carried away implementing a complete set of shortcodes for rendering depictions of gamepad buttons like :btn-l-snes: within passages of text. Bit of a shame, though, that I’d have to either patch pulldown-cmark or maintain the smart punctuation and strikethrough extensions externally, if I want to hook in shortcodes early enough in the pipeline to be able to implement Compose key-inspired ones like :'e:/:e': → é or :~n:/:n~: → ñ without breaking things.)
    • Since my plans for comments are, to the best of my knowledge, unique, I need something in a language I’m willing to hack on and potentially maintain my own fork of. (With Jekyll, this would have been achieved via a preprocessor.)
    • I want something where I’m at least willing to port the internal broken link detection from one of my old bespoke Python static site generators, which means either Python or Rust. (Ideally, I’ll also re-create the support for performing HTML and CSS validation on the generated output.)
    • Given how many things I either have in my existing single-page renderer (eg. automatic ToC generation with a bespoke scrollspy implementation, Syntect integration, ```svgbob fenced code blocks which produce rendered diagrams, <price></price> tags which provide currency-conversion estimation tooltips with the exchange rate defined in a central location, etc.) or have plans for (eg. plotters-generated charts with some kind of contributed extension equivalent to matplotlib’s xkcd mode because it’s important, Wikipedia-style infobox sidebars, etc.), I want to experiment with a WebAssembly-based plugin API so I’m not throwing the kitchen sink in.
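
    To make the shortcode idea concrete, here’s a rough sketch of the kind of pre-pass I mean, with made-up names and a made-up replacement table (a real implementation would need escaping and would have to cooperate with code spans, which is exactly the pipeline-ordering problem described above):

```rust
use std::collections::HashMap;

// Replace `:name:` tokens with their expansions before the Markdown
// renderer ever sees the document. Anything that isn't a known,
// whitespace-free name between two colons passes through untouched.
fn expand_shortcodes(input: &str, table: &HashMap<&str, &str>) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find(':') {
        out.push_str(&rest[..start]);
        let after = &rest[start + 1..];
        if let Some(end) = after.find(':') {
            let name = &after[..end];
            if !name.is_empty() && !name.contains(char::is_whitespace) {
                if let Some(replacement) = table.get(name) {
                    out.push_str(replacement);
                    rest = &after[end + 1..];
                    continue;
                }
            }
        }
        // Not a shortcode: keep the colon and move on.
        out.push(':');
        rest = after;
    }
    out.push_str(rest);
    out
}
```

    With a table mapping, say, 'e to é and btn-l-snes to the rendered button, expand_shortcodes("caf:'e:", &table) comes out as "café" while ordinary colons in prose survive unchanged.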