Solver parallelization and parametric distribution on a single machine
Parallelization and distribution are two important concepts in computer science and software engineering. Both refer to techniques that allow a program to execute multiple tasks simultaneously, increasing its speed and efficiency.
Parallelization involves breaking down a large task into smaller sub-tasks that can be executed concurrently on different processors or cores. This technique is often used in high-performance computing, scientific simulations, and machine learning algorithms that require massive amounts of computation. Parallelization can be achieved using shared memory architectures, such as multi-core processors, or distributed memory architectures, such as computer clusters.
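As a generic illustration of the shared-memory case described above (not the Flux solver itself), the following sketch splits one large computation into chunks and evaluates them concurrently on several cores; `partial_sum` and `parallel_sum_of_squares` are hypothetical names chosen for this example.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """One sub-task: sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into equal chunks and evaluate them concurrently,
    one process per chunk, then combine the partial results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1000))
```

The combination step (the final `sum`) is what makes this a single large task executed in parallel, as opposed to several independent tasks.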
Distribution, on the other hand, involves dividing a task into smaller parts and spreading them across several computing units that work together to complete it. In Flux, this technique is used for parametric distribution: several configurations of the same project, each with different values of its geometrical and physical parameters, are solved concurrently.
Both parallelization and distribution have their own benefits and drawbacks, and the choice between the two depends on the requirements of the task at hand. Parallelization is well-suited for projects with a heavy mesh, where a single solve involves a large amount of computation.
Distribution is well-suited for tasks that process large amounts of data, such as a heavy parametric study in which several configurations of a Flux project are solved for different values of its geometrical and physical parameters.
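A parametric study of this kind can be sketched generically as follows. This is not the Flux API: `solve_configuration` is a hypothetical stand-in for launching one solver run, and the parameter names (`air_gap`, `current`) are invented for illustration. The point is the pattern: build every parameter combination, then solve the resulting independent configurations concurrently, each in its own process.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def solve_configuration(params):
    """Hypothetical stand-in for one solver run; a real setup would invoke
    the solver on a project built with these parameter values."""
    air_gap, current = params
    return {"air_gap": air_gap, "current": current, "result": air_gap * current}

def distribute(air_gaps, currents, workers=2):
    """Build every (geometrical, physical) parameter combination and solve
    the configurations concurrently. Each run is independent, so no
    communication is needed between workers."""
    configs = list(product(air_gaps, currents))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_configuration, configs))

if __name__ == "__main__":
    for run in distribute([0.5, 1.0], [10, 20]):
        print(run)
```

Because the runs share nothing, this pattern scales from several processes on a single machine to several machines with no change to the logic, only to how the workers are launched.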