
I am developing a Python application in which some sections require a lot of CPU for calculations.

However, I have noticed that even at these points the CPU never exceeds 50% usage.

So the program runs slower than it could, even though it has CPU "to spare" that could make it faster.

For example, the loop below takes about 15 seconds on my PC:

from math import cos
import time

ini = time.time()

for x in range(10**8):
    a = cos(x)

print("Total Time:", time.time() - ini)

But while it runs, only a few logical processors are busy, and even the single most loaded one never reaches 100%.

This is my CPU before running the code above:

[screenshot: CPU usage before running the code]

And while running the code:

[screenshot: CPU usage while running the code]

How can I make Python use 100% of the CPU in critical sections?

Rogério Dec
  • How your system resources are divided between processes is the job of your operating system. If you do parallel processing you might involve more than one CPU, but that's it. As is, your Python runs in one thread only. – Patrick Artner Oct 06 '18 at 21:42
  • @PatrickArtner I'm surprised you haven't mentioned the GIL here, because I'm either missing something or this is designed behaviour of Python, regardless of how CPU-bound the task is – roganjosh Oct 06 '18 at 21:49
  • CPython _can't_ use all cores because of the Global Interpreter Lock (GIL) that was built in. This is why things like `numpy` exist (among other reasons). – roganjosh Oct 06 '18 at 21:54
  • So how could I make the code above use 100% CPU in all processors? – Rogério Dec Oct 06 '18 at 21:55
  • You should look into the `multiprocessing` module to leverage more CPU power – roganjosh Oct 06 '18 at 21:56
  • The GIL is irrelevant in this program, because there's only the one main thread. – Ned Batchelder Oct 06 '18 at 21:57
  • @RogérioDec You'll need to explicitly parallelize your computation, and use processes (not threads) to use more than one core. – Ned Batchelder Oct 06 '18 at 21:57
  • @NedBatchelder, thanks, but this is new for me. Could you create an answer with an example? – Rogério Dec Oct 06 '18 at 21:59
  • @NedBatchelder well I'm being schooled then. Why is the computation locked to one theoretical core? – roganjosh Oct 06 '18 at 21:59
  • What should I do to adapt the code above for it to use 100% CPU? – Rogério Dec Oct 06 '18 at 22:46
  • Do you specifically want a program that burns 100% of all CPUs and does no useful work? If not, please tell us what type of computation is being done in your actual program. The best solution depends on knowing what type of input, output, and computation is really done, and how many iterations your real loop has. – John Zwinck Oct 06 '18 at 22:53
  • @roganjosh "CPython can't use all cores because of the Global Interpreter Lock (GIL) that was built in" – CPython can't manipulate Python objects in more than one **thread** at the same time, though it can do it in as many **processes** as you want, thus utilising as many **cores** as you want. Moreover, there is no GIL when handling non-Python objects in CPython, therefore it can also utilise as many **cores** as you want using **threads**, as long as the GIL is released. – Eli Korvigo Oct 06 '18 at 22:53
  • I suspect underground maintenance threads such as the garbage collector and whatnot might be responsible for all this high CPU usage. Yes, "high", because your program is not multi-threaded and there shouldn't be any reason for it to be taking more than 1/8 of the CPU available, let alone 100% of it. – Havenard Oct 06 '18 at 22:58
  • @EliKorvigo I can accept that, but Python, without multiprocessing, will be locked to a single core. Nothing in your comment suggests to me that this behaviour isn't linked to the GIL. Numpy can release the GIL and you'll see the CPU usage jump. If the GIL mechanism is not at play, why doesn't a Python program use all cores in general? – roganjosh Oct 06 '18 at 22:59
  • @RogérioDec CPU cores execute a sequence of instructions. Your program is a sequence of instructions. A CPU cannot know which of these instructions it makes sense to execute in parallel on different cores. Taking advantage of multiple CPUs/cores is left to the programmer: to use more CPUs/cores, you need to run several programs (processes) in parallel or divide your program into several threads, and figure out yourself which parts can run in parallel. – nos Oct 06 '18 at 23:00
  • @JohnZwinck, of course, my program does useful work. I just gave a theoretical example to make it easier to understand. It's not worth putting all the code here. But the question remains: if a computation could be completed in 1/8 of the time by using all the CPU power, why should I settle for running 8x slower? – Rogério Dec Oct 06 '18 at 23:01
  • @nos, it seems very logical. So, in my case, should I refactor my code, for example, creating 8 independent loops in parallel (each with a different `range`), one for each thread, to obtain the full performance? – Rogério Dec Oct 06 '18 at 23:05
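The refactoring the last comment describes (independent chunks of the `range`, one per worker) can be sketched with the standard `multiprocessing` module. This is a sketch rather than anything posted in the thread; the chunking scheme and worker count are illustrative:

```python
from math import cos
from multiprocessing import Pool, cpu_count
import time

def work(chunk):
    # Each worker process runs its own slice of the original loop,
    # so the GIL of one process does not block the others.
    start, stop = chunk
    for x in range(start, stop):
        a = cos(x)

if __name__ == "__main__":
    n = 10 ** 8
    workers = cpu_count()
    step = n // workers
    # Split [0, n) into one contiguous chunk per logical processor.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder

    ini = time.time()
    with Pool(workers) as pool:
        pool.map(work, chunks)
    print("Total Time:", time.time() - ini)
```

Because each chunk is a separate OS process, Task Manager should show all logical processors loaded while the pool runs; the speedup is bounded by the number of physical cores and the per-process startup overhead.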

0 Answers