Before returning, it strips any whitespace and converts the value to a number.
Tested on Linux 4.4 and 4.9, but even an early Linux version should work: looking in man proc
and searching for information on the /proc/$PID/status
file, it mentions minimum kernel versions for some fields (like Linux 2.6.10 for "VmPTE"), but the "VmRSS" field (which I use here) has no such note. Therefore I assume it has been there since an early version.
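For reference, here is a minimal sketch of that approach (the helper name is mine, and it assumes a Linux /proc filesystem): it scans /proc/self/status for the VmRSS line, splits off the whitespace, and converts the kB value to bytes.

def get_vmrss_bytes():
    """Return the current resident set size of this process in bytes."""
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                # Line looks like "VmRSS:     12345 kB"; split() strips the whitespace.
                return int(line.split()[1]) * 1024
    raise RuntimeError('VmRSS not found in /proc/self/status')

print(get_vmrss_bytes())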
Below is my function decorator, which tracks how much memory the process consumed before the function call, how much it uses after the call, and how long the function takes to execute.
import time
import os
import psutil


def elapsed_since(start):
    return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))


def get_process_memory():
    process = psutil.Process(os.getpid())
    return process.memory_info().rss


def track(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory()
        print("{}: memory before: {:,}, after: {:,}, consumed: {:,}; exec time: {}".format(
            func.__name__,
            mem_before, mem_after, mem_after - mem_before,
            elapsed_time))
        return result
    return wrapper
So, when you have some function decorated with it:
from utils import track


@track
def list_create(n):
    print("inside list create")
    return [1] * n
You will be able to see this output:
inside list create
list_create: memory before: 45,928,448, after: 46,211,072, consumed: 282,624; exec time: 00:00:00
For Python 3.6 and psutil 5.4.5, it is easier to use the memory_percent() function listed here.
import os
import psutil

process = psutil.Process(os.getpid())
print(process.memory_percent())
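If you want bytes rather than a percentage, one option (just a sketch, assuming the default rss-based comparison) is to multiply back by the total system memory:

import os
import psutil

process = psutil.Process(os.getpid())

# memory_percent() compares the process RSS against total system memory by default,
# so multiplying back by the total gives an approximate byte count.
approx_rss = process.memory_percent() / 100 * psutil.virtual_memory().total
print("{:,.0f} bytes resident (approx.)".format(approx_rss))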
Even easier to use than /proc/self/status: /proc/self/statm. It's just a space-delimited list of several statistics. I haven't been able to tell if both files are always present.
/proc/[pid]/statm
Provides information about memory usage, measured in pages. The columns are:
- size (1) total program size (same as VmSize in /proc/[pid]/status)
- resident (2) resident set size (same as VmRSS in /proc/[pid]/status)
- shared (3) number of resident shared pages (i.e., backed by a file) (same as RssFile+RssShmem in /proc/[pid]/status)
- text (4) text (code)
- lib (5) library (unused since Linux 2.6; always 0)
- data (6) data + stack
- dt (7) dirty pages (unused since Linux 2.6; always 0)
Here's a simple example:
from pathlib import Path
from resource import getpagesize

PAGESIZE = getpagesize()
PATH = Path('/proc/self/statm')


def get_resident_set_size() -> int:
    """Return the current resident set size in bytes."""
    # statm columns are: size resident shared text lib data dt
    statm = PATH.read_text()
    fields = statm.split()
    return int(fields[1]) * PAGESIZE


data = []
start_memory = get_resident_set_size()
for _ in range(10):
    data.append('X' * 100000)
    print(get_resident_set_size() - start_memory)
That produces a list that looks something like this:
0
0
368640
368640
368640
638976
638976
909312
909312
909312
You can see that it jumps by about 300,000 bytes after roughly 3 allocations of 100,000 bytes.
I like it, thanks to @bayer's answer. Based on it, I now have a tool to count the memory of a specific set of processes.
# Megabytes.
$ ps aux | grep python | awk '{sum=sum+$6}; END {print sum/1024 " MB"}'
87.9492 MB

# Kilobytes.
$ ps aux | grep python | awk '{sum=sum+$6}; END {print sum " KB"}'
90064 KB
Attaching my process list:
$ ps aux | grep python
root       943  0.0  0.1  53252  9524 ?        Ss   Aug19  52:01 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
root       950  0.6  0.4 299680 34220 ?        Sl   Aug19 568:52 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
root      3803  0.2  0.4 315692 36576 ?        S    12:43   0:54 /usr/bin/python /usr/local/bin/beaver -c /etc/beaver/beaver.conf -l /var/log/beaver.log -P /var/run/beaver.pid
jonny    23325  0.0  0.1  47460  9076 pts/0    S+   17:40   0:00 python
jonny    24651  0.0  0.0  13076   924 pts/4    S+   18:06   0:00 grep python
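The same sum can be computed from within Python with psutil instead of shelling out to ps and awk; this is only a sketch, and it matches any process whose name contains "python" (including the script itself):

import psutil

total_rss = 0
for proc in psutil.process_iter(['name', 'memory_info']):
    # Sum the resident set size of every process whose name contains "python".
    name = proc.info['name'] or ''
    mem = proc.info['memory_info']
    if 'python' in name and mem is not None:
        total_rss += mem.rss

print("{:.4f} MB".format(total_rss / 1024 / 1024))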
On Windows, you can use the pywin32 bindings:

import os, win32api, win32con, win32process

han = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION | win32con.PROCESS_VM_READ, 0, os.getpid())
process_memory = int(win32process.GetProcessMemoryInfo(han)['WorkingSetSize'])
On Unix systems, the time command (/usr/bin/time) gives you that info if you pass -v. See Maximum resident set size below, which is the maximum (peak) real (not virtual) memory used during program execution:
$ /usr/bin/time -v ls /
    Command being timed: "ls /"
    User time (seconds): 0.00
    System time (seconds): 0.01
    Percent of CPU this job got: 250%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 0
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 315
    Voluntary context switches: 2
    Involuntary context switches: 0
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0
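If you want the same peak figure from inside a running Python program, the standard resource module exposes it (a sketch assuming Linux, where ru_maxrss is reported in kilobytes; macOS reports bytes instead):

import resource

# Peak (maximum) resident set size of the current process so far.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("Peak resident set size: {} kB".format(peak))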
Using the sh and os modules to get bayer's answer into Python:

import os
import sh

float(sh.awk(sh.ps('u', '-p', os.getpid()), '{sum=sum+$6}; END {print sum/1024}'))

The answer is in megabytes.