If you want your process to start in the background, you can either use os.system() and invoke it the same way your shell script did, or you can spawn it:
```python
import os

os.spawnl(os.P_DETACH, 'some_long_running_command')
```
(note that os.P_DETACH is Windows-only; alternatively, you can try the more portable os.P_NOWAIT flag).
See the documentation for the os.spawn* family of functions.
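For instance, a minimal sketch using os.P_NOWAIT (the child here is a throwaway Python one-liner standing in for your real command):

```python
import os
import sys

# P_NOWAIT returns immediately with the child's PID instead of
# waiting for the command to finish.
pid = os.spawnl(os.P_NOWAIT, sys.executable, sys.executable, '-c', 'pass')

# Later, reap the child so it does not linger as <defunct>.
os.waitpid(pid, 0)
```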
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system
function, e.g.:
```python
import os

os.system("some_command &")
```
Basically, whatever you pass to the system
function will be executed the same as if you'd passed it to the shell in a script.
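For example (using `sleep 1` as a stand-in command; the trailing & is ordinary shell syntax telling the shell to background the command):

```python
import os

# The string is handed to a shell verbatim, so the '&' backgrounds the
# command; os.system() returns the shell's exit status immediately.
status = os.system("sleep 1 &")
```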
Use subprocess.Popen()
with the close_fds=True
parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
```python
import os, time, sys, subprocess

if len(sys.argv) == 2:
    # child: do the long-running work
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    # parent: relaunch this script as a detached child and exit
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'],
                     close_fds=True)
    print('main end')
```
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problem.
The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess function in the Win API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
```python
DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
```
Capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
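A minimal sketch of that blocking behaviour (the child is a hypothetical stand-in that sleeps before printing):

```python
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(2); print("done")'],
    stdout=subprocess.PIPE)

# read() does not return until the child closes its end of the pipe,
# so this line blocks for the full two seconds.
output = proc.stdout.read()
proc.wait()
```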
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to a log file and to my own stdout.
The threading
module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
```python
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
```
sleep.py
```python
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
```
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to contain:

```
0
10
1
11
2
12
3
13
```
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking off child processes (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout, and stderr for the new process):
```python
try:
    pid = os.fork()
except OSError as e:
    # some debug output
    sys.exit(1)

if pid == 0:
    # optionally use os.putenv(...) to set environment variables
    # args[0] serves as both the executable path and the child's argv[0]
    os.execv(args[0], args)
```
You can use
```python
import os

pid = os.fork()
if pid == 0:
    # child process: continue to other code ...
```
This will make the Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the process should not show a window and should behave like a background process.
Unlike some prior answers that use subprocess.Popen, this answer uses subprocess.run instead. The issue with using Popen is that if the process is not manually waited on until completion, a stale <defunct> entry remains in the Linux process table, as seen by ps. These entries can add up.
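A quick Linux-only sketch of the effect (the child is a trivial stand-in, and /proc is read directly instead of running ps):

```python
import subprocess
import sys
import time

proc = subprocess.Popen([sys.executable, '-c', 'pass'])
time.sleep(1)  # let the child exit without being waited on

# The exited-but-unreaped child shows state 'Z' (zombie / <defunct>)
# in the process table.
with open(f'/proc/{proc.pid}/stat') as f:
    state = f.read().rsplit(') ', 1)[1].split()[0]

proc.wait()  # reaping it removes the <defunct> entry
```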
In contrast, subprocess.run waits by design for the process to complete, so no such defunct entry remains in the process table. Because subprocess.run is blocking, it can be run in a thread, and the rest of the code can continue after starting that thread. In this way, the process effectively runs in the background.
```python
import subprocess, threading

kwargs = dict(stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
              check=True, **your_kwargs)
threading.Thread(target=subprocess.run, args=(your_command,),
                 kwargs=kwargs).start()
```