Amdahl's Law (1967) notes that a parallel process always has some serial component to it, and the speedup from parallelization is limited by that serial path. It's the argument for getting processors with faster cores rather than more cores, because past a point you get more benefit from fast cores than from a vast array of slow ones.
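As a minimal sketch of the formula (the 95% parallel fraction below is an illustrative number, not a measurement): if p is the fraction of the work that can be parallelized and N is the core count, the speedup is 1 / ((1 - p) + p / N).

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: speedup on n cores for a workload whose parallelizable fraction is p."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 128 cores only buy you ~17x,
# and piling on more cores asymptotes at 1 / (1 - p) = 20x:
print(amdahl_speedup(0.95, 128))     # ~17.4
print(amdahl_speedup(0.95, 10_000))  # ~20.0
```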
You can see the results of Amdahl's Law on a multicore processor running something complex (like a big software build) by using htop. The usual experience is that the system buzzes along for a long time with all the cores busy and then suddenly comes to a screeching halt when the serial part of some effort (e.g. a serial linker) has its turn to run. Your fancy 128-core system grinds away with only one or two cores running through that serial part of the process.
I found this "Cornell Virtual Workshop" https://cvw.cac.cornell.edu/parallel/intro/index on parallel computing in my search for a good writeup on Amdahl's Law. It also introduces "Gustafson's Law" (1988) https://cvw.cac.cornell.edu/parallel/efficiency/amdahls-law which suggests that as researchers get access to more cores, they focus not on making their runtimes go faster with the added processing power, but in doing more work within an acceptable amount of time. So if you decide that it's good enough to run a test suite in 10 minutes, and your system gets faster by adding more cores, instead of pushing to reduce the time to something smaller you'll be inclined to run more tests within that 10 minute time window.
See http://www.johngustafson.net/pubs/pub13/amdahl.htm for Gustafson's formulation of this: "One does not take a fixed-size problem and run it on various numbers of processors except when doing academic research; in practice, the problem size scales with the number of processors."
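A minimal sketch of the contrast, again with made-up numbers: Gustafson's scaled speedup holds the wall-clock time fixed and asks how much more work fits, giving s + (1 - s) * N for a serial fraction s measured on the parallel run.

```python
def gustafson_scaled_speedup(s: float, n: int) -> float:
    """Gustafson's Law: scaled speedup on n cores when the problem size grows
    with the machine; s is the serial fraction of the parallel run's wall-clock time."""
    return s + (1.0 - s) * n

# With the serial part held at 5% of a fixed time budget (e.g. that 10-minute
# test window), the amount of work you can finish grows nearly linearly with cores:
print(gustafson_scaled_speedup(0.05, 128))  # ~121.7
```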