Threads are concurrent lines of execution within the same process (in your case, your Python program). They are concurrent in the sense that they run "at the same time", but each with its own flow of execution, almost as if they were separate programs. The difference is that separate programs really are separate: each one has its own memory area, and communication between them is not so simple (although it is possible through mechanisms such as sockets, pipes, shared memory, etc.). Threads, on the other hand, run inside the same process, so they can share memory directly (and there you need to take certain precautions, such as using semaphores to control the order of access to the data and avoid corruption from one thread writing while another reads).

On a computer with a single CPU there is no magic: the threads do not really run at the same time. The operating system (OS) is responsible for scheduling the threads so that each one runs a little at a time, producing the impression (or the practical result) that they are simultaneous. So the OS lets thread A run a little, then passes control to thread B to run a little, then goes back to thread A, and so on. That is why, when you do share memory and use semaphores (note: this is not the case in your example program, which is very simple), you also need to watch out for deadlock situations: thread A may be stuck waiting for thread B to release a resource while thread B is stuck waiting for thread A to release another resource, which produces a deadlock (https://pt.wikipedia.org/wiki/Deadlock) that halts your whole program. On the other hand, on a computer with more than one CPU the threads can indeed execute simultaneously, one on each available and free CPU. The concurrency problems mentioned above may still exist, but performance tends to be better because the OS has an easier time doing the scheduling (which still needs to happen).

Well, in your program there are two threads that run the same function. They therefore do exactly the same thing and tend to take the same amount of time in each scheduling slice. Assuming a computer with a single CPU, the OS knows it can hand control of the single processor to another thread when the current one stops processing to do something involving the hardware, or simply to wait, for example. So, when one of your threads executes a print, during the time the printing takes to happen on the hardware that thread is doing nothing, and the OS passes control of the single processor to the other thread. The same happens when it calls sleep.

Since your two threads do the same thing, the result tends to be nicely alternated (sometimes the order flips, but there are always "pairs" of executions of the two threads), as you observed (a minimal sketch of a program with this shape appears after the listing below):

Carrinho : Ed 0
Carrinho : Paulo 0
Carrinho : Paulo 1.2
Carrinho : Ed 1.1
Carrinho : Ed 2.2
Carrinho : Paulo 2.4
Carrinho : Ed 3.3000000000000003
Carrinho : Paulo 3.5999999999999996
Carrinho : Paulo 4.8
Carrinho : Ed 4.4
Carrinho : Ed 5.5
Carrinho : Paulo 6.0
Carrinho : Ed 6.6
Carrinho : Paulo 7.2
Carrinho : Ed 7.699999999999999
Carrinho : Paulo 8.4
Carrinho : Ed 8.799999999999999
Carrinho : Paulo 9.6
Carrinho : Ed 9.899999999999999
Carrinho : Paulo 10.799999999999999
Carrinho : Ed 10.999999999999998
Carrinho : Paulo 11.999999999999998
Carrinho : Ed 12.099999999999998
Carrinho : Paulo 13.199999999999998
Carrinho : Ed 13.199999999999998
Carrinho : Paulo 14.399999999999997
Carrinho : Ed 14.299999999999997
Carrinho : Paulo 15.599999999999996
Carrinho : Ed 15.399999999999997
Carrinho : Paulo 16.799999999999997
Carrinho : Ed 16.499999999999996
Carrinho : Paulo 17.999999999999996
Carrinho : Ed 17.599999999999998
Carrinho : Paulo 19.199999999999996
Carrinho : Ed 18.7
Carrinho : Paulo 20.399999999999995
Carrinho : Ed 19.8
Carrinho : Paulo 21.599999999999994
Carrinho : Ed 20.900000000000002
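Just to make the discussion above concrete, here is a minimal sketch, not your actual code, of a program with this shape. The function and variable names here are assumptions; what matters is the structure: two threads running the exact same function, each one printing and then calling time.sleep(0.3), which is what gives the OS the chance to alternate between them.

import threading
import time

def correr(nome, incremento):
    # Both threads run this very same function.
    posicao = 0
    for _ in range(20):
        print("Carrinho :", nome, posicao)
        posicao += incremento   # fixed increment (see the final note: +=, not *=)
        time.sleep(0.3)         # while this thread sleeps, the OS runs the other one

t1 = threading.Thread(target=correr, args=("Ed", 1.1))
t2 = threading.Thread(target=correr, args=("Paulo", 1.2))
t1.start()
t2.start()
t1.join()
t2.join()

If the two threads also shared a mutable variable, this is where a lock (threading.Lock) would come in to serialize access to it, as mentioned above.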
However, try completely removing the line with the time.sleep(0.3) call and redirecting the output to a text file (running programa.py > teste.txt, for example). Then look at the final result (the program will run very quickly and you will not see anything on screen, because the output will be inside the text file). It will be something like:

Carrinho : Ed 0
Carrinho : Ed 1.1
Carrinho : Ed 2.2
Carrinho : Ed 3.3000000000000003
Carrinho : Ed 4.4
Carrinho : Ed 5.5
Carrinho : Ed 6.6
Carrinho : Ed 7.699999999999999
Carrinho : Ed 8.799999999999999
Carrinho : Ed 9.899999999999999
Carrinho : Ed 10.999999999999998
Carrinho : Ed 12.099999999999998
Carrinho : Ed 13.199999999999998
Carrinho : Ed 14.299999999999997
Carrinho : Ed 15.399999999999997
Carrinho : Ed 16.499999999999996
Carrinho : Ed 17.599999999999998
Carrinho : Ed 18.7
Carrinho : Ed 19.8
Carrinho : Ed 20.900000000000002
. . .
Carrinho : Paulo 0
Carrinho : Paulo 1.2
Carrinho : Paulo 2.4
Carrinho : Paulo 3.5999999999999996
Carrinho : Paulo 4.8
Carrinho : Paulo 6.0
Carrinho : Paulo 7.2
Carrinho : Paulo 8.4
Carrinho : Paulo 9.6
Carrinho : Paulo 10.799999999999999
Carrinho : Paulo 11.999999999999998
Carrinho : Paulo 13.199999999999998
Carrinho : Paulo 14.399999999999997
Carrinho : Paulo 15.599999999999996
Carrinho : Paulo 16.799999999999997
Carrinho : Paulo 17.999999999999996
Carrinho : Paulo 19.199999999999996
Carrinho : Paulo 20.399999999999995
Note how the first thread printed practically all of its output before the second thread printed its results. This happens because writing to a file is considerably faster than writing to the screen, so the thread that started first (Ed) gives the OS fewer opportunities to switch between it and the other one. This result came from running the program on my computer, which has 8 processing cores (8 CPUs). Nothing else was running, so the other CPUs were certainly idle. A comment (https://pt.stackoverflow.com/questions/143552/entendendo-o-conceito-de-threads-na-pr%c3%a1tica-em-python/148111?noredirect=1#comment306782_148111) points out that this may be due to some difficulty Python has in distributing threads among the processors, which may well be true. But perhaps the processing of the threads was simply so fast that the OS did not have time to do any scheduling at all.

The fact is that, regardless of whether you have one or more CPUs available, there is no way to guarantee the exact order in which the threads will be scheduled, because that is up to the OS. Some OSs and languages allow priorities to be defined, but even then the final choice belongs to the OS.

So, if you really want to simulate the cars advancing one time frame at a time, it is more common to use a single thread and handle the scheduling of the "tasks" yourself. That is what games, for example, usually do: they run a loop at a previously established time interval (controlling the duration of each "frame" of the animation, as in the movies) and execute the iteration step of each car instance, which can be implemented with object orientation, for example.

The time framing is done as follows. Suppose you want each iteration step to last approximately 30 milliseconds (this is merely an arbitrary choice for the sake of the example; I am not saying it is the best or most correct choice, since that depends on other issues that are beside the point here). In each pass of the loop (while), you do the following (a sketch of such a loop appears at the end of this answer):

1. Run Ed->move() and measure how long this call takes to process. Subtract the processed time from the 30 milliseconds and save the result in a variable t.
2. Run Paulo->move() and measure how long this call takes to process. Subtract that time from the variable t.
3. If there is some time left in t, it is a sign that the calls were fast, so you call sleep with whatever remains in t to wait out the rest and thus complete the 30 milliseconds. If nothing is left, you do not call sleep, because this "frame" has to end immediately to try to compensate for the time spent moving the cars.

Final Note: note that, regardless of what was discussed above,
your cars move at different speeds. The first car increases its traveled distance by 10% at each iteration, while the second car increases it by 20%*. The key point of this explanation is that, to make this movement closer to the real thing, you need to guarantee that the iteration time intervals for each car are always the same. Maybe this already strays from your concern with understanding threads, but I thought it would be useful to explain.

* In fact the speed increment is fixed at 1.1 or 1.2, because you use +=. The increment would be a percentage, as described, if you used *=.
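To make the time framing steps above concrete, here is a minimal sketch of such a single-threaded loop. The Carro class and its move() method are hypothetical, just for illustration; only the measure, subtract and sleep-the-remainder logic follows the three steps listed earlier.

import time

class Carro:
    # Hypothetical car class, just to illustrate the loop below.
    def __init__(self, nome, incremento):
        self.nome = nome
        self.incremento = incremento
        self.posicao = 0

    def move(self):
        self.posicao += self.incremento
        print("Carrinho :", self.nome, self.posicao)

FRAME = 0.030                            # ~30 ms per iteration step (arbitrary choice)
ed = Carro("Ed", 1.1)
paulo = Carro("Paulo", 1.2)

while ed.posicao < 20 and paulo.posicao < 20:
    inicio = time.time()
    ed.move()
    t = FRAME - (time.time() - inicio)   # step 1: discount Ed's processing time from 30 ms
    inicio = time.time()
    paulo.move()
    t -= time.time() - inicio            # step 2: discount Paulo's processing time from t
    if t > 0:
        time.sleep(t)                    # step 3: wait out what is left of the frame
    # if t <= 0, the frame already ran late, so the next one starts immediately

This way each "frame" lasts roughly 30 ms regardless of how long the two move() calls take, so both cars advance on the same time basis.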