(For context, I’m basically referring to Python 3.12 “multiprocessing.Pool Vs. concurrent.futures.ThreadPoolExecutor”…)

Today I read that multiple cores (parallelism) help with CPU-bound operations, while multiple threads (concurrency) are the right tool when the tasks are I/O-bound.
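
To make sure I understand the distinction, here's a minimal sketch of that rule of thumb (`fetch_url` and `crunch_numbers` are just made-up stand-ins for I/O-bound and CPU-bound work):

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool
from urllib.request import urlopen

def fetch_url(url):
    # I/O bound: the thread spends most of its time waiting on the network,
    # so the GIL is released and threads overlap nicely.
    with urlopen(url) as resp:
        return len(resp.read())

def crunch_numbers(n):
    # CPU bound: pure Python bytecode holds the GIL, so only separate
    # processes give a real parallel speedup.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Threads for the I/O-bound part...
    urls = ["https://example.com"] * 4
    with ThreadPoolExecutor(max_workers=4) as tpe:
        sizes = list(tpe.map(fetch_url, urls))

    # ...processes for the CPU-bound part.
    with Pool(processes=4) as pool:
        results = pool.map(crunch_numbers, [10_000_000] * 4)
```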

Is this correct? Would anyone care to elaborate for me?

At least from a theoretical standpoint. Of course, most real workloads are a mix of both, and I'd better start by profiling where the bottlenecks really are.

In case a concrete "algorithm" helps: let's say I have a function that applies a map-reduce strategy, reading data chunks from a file on disk, computing some averages from that data, and saving the results to a new file.
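
Something along these lines is what I have in mind (the file names, chunk size, and one-number-per-line format are all made up for illustration, and it only computes a single overall average to keep the sketch short):

```python
from multiprocessing import Pool

CHUNK_SIZE = 100_000  # lines per chunk, arbitrary

def read_chunks(path):
    # I/O-bound part: yield lists of floats, one chunk at a time.
    with open(path) as f:
        chunk = []
        for line in f:
            chunk.append(float(line))
            if len(chunk) >= CHUNK_SIZE:
                yield chunk
                chunk = []
        if chunk:
            yield chunk

def map_chunk(chunk):
    # CPU-bound "map" step: partial sum and count for one chunk.
    return sum(chunk), len(chunk)

if __name__ == "__main__":
    total, count = 0.0, 0
    with Pool() as pool:
        # "Reduce" step: fold the per-chunk partials into one running total.
        for s, c in pool.imap_unordered(map_chunk, read_chunks("input.txt")):
            total += s
            count += c

    with open("averages.txt", "w") as out:
        out.write(f"{total / count if count else 0.0}\n")
```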

  • Zykino@programming.dev · 16 days ago
    and you won’t use At “just” for a bit of concurrency. Right?

    Is “At” a typo?

    Yes, I wanted to talk about the Qt framework. But with that many ways to do concurrency in the language’s core, I suspect you would use this framework for more than just its signal/slot feature, for example if you want its data structures, its networking or GUI stack, …

    I’m not using Python, but I love learning the quirks of each language.