multithreading - Linux - Difference between migrations and switches?
By looking at the scheduling stats in /proc/&lt;pid&gt;/sched, you can get output like this:
[horro@system ~]$ cat /proc/1/sched
systemd (1, #threads: 1)
-------------------------------------------------------------------
se.exec_start                 : 2499611106.982616
se.vruntime                   : 7952.917943
se.sum_exec_runtime           : 58651.279127
se.nr_migrations              : 53355
nr_switches                   : 169561
nr_voluntary_switches         : 168185
nr_involuntary_switches       : 1376
se.load.weight                : 1048576
se.avg.load_sum               : 343837
se.avg.util_sum               : 338827
se.avg.load_avg               : 7
se.avg.util_avg               : 7
se.avg.last_update_time       : 2499611106982616
policy                        : 0
prio                          : 120
clock-delta                   : 180
mm->numa_scan_seq             : 1
numa_pages_migrated           : 296
numa_preferred_nid            : 0
total_numa_faults             : 34
current_node=0, numa_group_id=0
numa_faults node=0 task_private=0 task_shared=23 group_private=0 group_shared=0
numa_faults node=1 task_private=0 task_shared=0 group_private=0 group_shared=0
numa_faults node=2 task_private=0 task_shared=0 group_private=0 group_shared=0
numa_faults node=3 task_private=0 task_shared=11 group_private=0 group_shared=0
numa_faults node=4 task_private=0 task_shared=0 group_private=0 group_shared=0
numa_faults node=5 task_private=0 task_shared=0 group_private=0 group_shared=0
numa_faults node=6 task_private=0 task_shared=0 group_private=0 group_shared=0
numa_faults node=7 task_private=0 task_shared=0 group_private=0 group_shared=0
I have been trying to figure out the differences between migrations and switches from the responses here and here. Summarizing these responses:
- nr_switches: number of context switches.
- nr_voluntary_switches: number of voluntary switches, i.e. the thread blocked and hence another thread was picked up.
- nr_involuntary_switches: the scheduler kicked the thread out because there was another hungry thread ready to run.
What, then, are migrations? Are these concepts related or not? Are migrations among cores, while switches happen within a core?
A migration is when a thread, after a context switch, is scheduled on a different CPU than the one it was scheduled on before.
Edit 1:
Here is more info on Wikipedia about process migration: https://en.wikipedia.org/wiki/Process_migration
Here is the kernel code that increments the counter: https://github.com/torvalds/linux/blob/master/kernel/sched/core.c#L1175
if (task_cpu(p) != new_cpu) {
    ...
    p->se.nr_migrations++;
    ...
}
Edit 2:
A thread can migrate to another CPU in the following cases:
- during exec()
- during fork()
- during thread wake-up
- if the thread's affinity mask has changed
- when the current CPU is going offline
For more info, please have a look at the functions set_task_cpu(), move_queued_task(), and migrate_tasks() in the same source file: https://github.com/torvalds/linux/blob/master/kernel/sched/core.c
The policies the scheduler follows are described in select_task_rq() and depend on the class of scheduler being used. The basic version of the policy is:
if (p->nr_cpus_allowed > 1)
    cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);
else
    cpu = cpumask_any(&p->cpus_allowed);
Source: https://github.com/torvalds/linux/blob/master/kernel/sched/core.c#L1534
So, in order to avoid migrations, set the CPU affinity mask for your threads using the sched_setaffinity(2) system call or the corresponding POSIX API pthread_setaffinity_np(3).
Here is the definition of select_task_rq() for the fair scheduler: https://github.com/torvalds/linux/blob/master/kernel/sched/fair.c#L5860
The logic is quite complicated but, basically, it either selects an idle sibling CPU or finds the least busy new one.
Hope this answers your question.