author     Linus Torvalds <torvalds@linux-foundation.org>   2012-04-02 08:53:24 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2012-04-02 08:53:24 -0700
commit     deb74f5ca1f22f9e1c5da93143a250dbb96535af (patch)
tree       ee9eb01e5433ea50f3414a469521a4c1cfaccb7a
parent     dd775ae2549217d3ae09363e3edb305d0fa19928 (diff)
parent     615399c84d1b8d8d8752629e5e5ab4e5044d6918 (diff)
Merge tag 'for-linus' of git://github.com/rustyrussell/linux
Pull cpumask cleanups from Rusty Russell:
"(Somehow forgot to send this out; it's been sitting in linux-next, and
if you don't want it, it can sit there another cycle)"
I'm a sucker for things that actually delete lines of code.
Fix up trivial conflict in arch/arm/kernel/kprobes.c, where Rusty fixed
a user of &cpu_online_map to be cpu_online_mask, but that code got
deleted by commit b21d55e98ac2 ("ARM: 7332/1: extract out code patch
function from kprobes").
* tag 'for-linus' of git://github.com/rustyrussell/linux:
cpumask: remove old cpu_*_map.
documentation: remove references to cpu_*_map.
drivers/cpufreq/db8500-cpufreq: remove references to cpu_*_map.
remove references to cpu_*_map in arch/
28 files changed, 79 insertions, 90 deletions
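Every hunk below applies the same mechanical conversion: the writable cpu_*_map bitmaps and the cpus_*()/cpu_set()/cpu_clear()/cpu_isset() helpers give way to the const cpu_*_mask pointers and the set_cpu_*()/cpu_online()-style accessors. The sketch below only illustrates the direction of the change; it is not part of the patch, and example_bring_cpu_up() is a hypothetical function. A second sketch after the diff covers the hotplug-exclusion API described in the Documentation/cpu-hotplug.txt hunk.

	#include <linux/cpumask.h>
	#include <linux/smp.h>

	/* Hypothetical example, not from the patch: old-style writable map
	 * accesses shown in comments, with the cpumask accessors this series
	 * standardizes on below them. */
	static void example_bring_cpu_up(unsigned int cpu)
	{
		/* old: cpu_set(cpu, cpu_possible_map); */
		set_cpu_possible(cpu, true);

		/* old: cpu_set(cpu, cpu_online_map); */
		set_cpu_online(cpu, true);

		/* old: while (!cpu_isset(cpu, cpu_online_map)) barrier(); */
		while (!cpu_online(cpu))
			barrier();
	}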
diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index 5c51ed406d1..cefd3d8bbd1 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -217,7 +217,7 @@ and name space for cpusets, with a minimum of additional kernel code.
The cpus and mems files in the root (top_cpuset) cpuset are read-only.
The cpus file automatically tracks the value of
-cpu_online_map using a CPU hotplug notifier, and the mems file
+cpu_online_mask using a CPU hotplug notifier, and the mems file
automatically tracks the value of node_states[N_HIGH_MEMORY]--i.e.,
nodes with memory--using the cpuset_track_online_nodes() hook.
diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
index a20bfd415e4..66ef8f35613 100644
--- a/Documentation/cpu-hotplug.txt
+++ b/Documentation/cpu-hotplug.txt
@@ -47,7 +47,7 @@ maxcpus=n Restrict boot time cpus to n. Say if you have 4 cpus, using
other cpus later online, read FAQ's for more info.
additional_cpus=n (*) Use this to limit hotpluggable cpus. This option sets
- cpu_possible_map = cpu_present_map + additional_cpus
+ cpu_possible_mask = cpu_present_mask + additional_cpus
cede_offline={"off","on"} Use this option to disable/enable putting offlined
processors to an extended H_CEDE state on
@@ -64,11 +64,11 @@ should only rely on this to count the # of cpus, but *MUST* not rely
on the apicid values in those tables for disabled apics. In the event
BIOS doesn't mark such hot-pluggable cpus as disabled entries, one could
use this parameter "additional_cpus=x" to represent those cpus in the
-cpu_possible_map.
+cpu_possible_mask.
possible_cpus=n [s390,x86_64] use this to set hotpluggable cpus.
This option sets possible_cpus bits in
- cpu_possible_map. Thus keeping the numbers of bits set
+ cpu_possible_mask. Thus keeping the numbers of bits set
constant even if the machine gets rebooted.
CPU maps and such
@@ -76,7 +76,7 @@ CPU maps and such
[More on cpumaps and primitive to manipulate, please check
include/linux/cpumask.h that has more descriptive text.]
-cpu_possible_map: Bitmap of possible CPUs that can ever be available in the
+cpu_possible_mask: Bitmap of possible CPUs that can ever be available in the
system. This is used to allocate some boot time memory for per_cpu variables
that aren't designed to grow/shrink as CPUs are made available or removed.
Once set during boot time discovery phase, the map is static, i.e no bits
@@ -84,13 +84,13 @@ are added or removed anytime. Trimming it accurately for your system needs
upfront can save some boot time memory. See below for how we use heuristics
in x86_64 case to keep this under check.
-cpu_online_map: Bitmap of all CPUs currently online. Its set in __cpu_up()
+cpu_online_mask: Bitmap of all CPUs currently online. Its set in __cpu_up()
after a cpu is available for kernel scheduling and ready to receive
interrupts from devices. Its cleared when a cpu is brought down using
__cpu_disable(), before which all OS services including interrupts are
migrated to another target CPU.
-cpu_present_map: Bitmap of CPUs currently present in the system. Not all
+cpu_present_mask: Bitmap of CPUs currently present in the system. Not all
of them may be online. When physical hotplug is processed by the relevant
subsystem (e.g ACPI) can change and new bit either be added or removed
from the map depending on the event is hot-add/hot-remove. There are currently
@@ -99,22 +99,22 @@ at which time hotplug is disabled.
You really dont need to manipulate any of the system cpu maps.
They should be read-only for most use. When setting up per-cpu resources almost always use
-cpu_possible_map/for_each_possible_cpu() to iterate.
+cpu_possible_mask/for_each_possible_cpu() to iterate.
Never use anything other than cpumask_t to represent bitmap of CPUs.
#include <linux/cpumask.h>
- for_each_possible_cpu - Iterate over cpu_possible_map
- for_each_online_cpu - Iterate over cpu_online_map
- for_each_present_cpu - Iterate over cpu_present_map
+ for_each_possible_cpu - Iterate over cpu_possible_mask
+ for_each_online_cpu - Iterate over cpu_online_mask
+ for_each_present_cpu - Iterate over cpu_present_mask
for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
#include <linux/cpu.h>
get_online_cpus() and put_online_cpus():
The above calls are used to inhibit cpu hotplug operations. While the
-cpu_hotplug.refcount is non zero, the cpu_online_map will not change.
+cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
If you merely need to avoid cpus going away, you could also use
preempt_disable() and preempt_enable() for those sections. Just remember
the critical section cannot call any
diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 4087a569b43..50d438db1f6 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -450,7 +450,7 @@ setup_smp(void)
smp_num_probed = 1;
}
- printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_map = %lx\n",
+ printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_mask = %lx\n",
smp_num_probed, cpumask_bits(cpu_present_mask)[0]);
}
diff --git a/arch/arm/kernel/kprobes.c b/arch/arm/kernel/kprobes.c
index ab1869dac97..4dd41fc9e23 100644
--- a/arch/arm/kernel/kprobes.c
+++ b/arch/arm/kernel/kprobes.c
@@ -152,7 +152,7 @@ int __kprobes __arch_disarm_kprobe(void *p)
void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
- stop_machine(__arch_disarm_kprobe, p, &cpu_online_map);
+ stop_machine(__arch_disarm_kprobe, p, cpu_online_mask);
}
void __kprobes arch_remove_kprobe(struct kprobe *p)
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 2cee7d1eb95..addbbe8028c 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -349,7 +349,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
* re-initialize the map in platform_smp_prepare_cpus() if
* present != possible (e.g. physical hotplug).
*/
- init_cpu_present(&cpu_possible_map);
+ init_cpu_present(cpu_possible_mask);
/*
* Initialise the SCU if there are more than one CPU
@@ -581,8 +581,9 @@ void smp_send_stop(void)
unsigned long timeout;
if (num_online_cpus() > 1) {
- cpumask_t mask = cpu_online_map;
- cpu_clear(smp_processor_id(), mask);
+ struct cpumask mask;
+ cpumask_copy(&mask, cpu_online_mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
smp_cross_call(&mask, IPI_CPU_STOP);
}
diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index 15d1fd22bbc..9b44a9e2d05 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -35,7 +35,7 @@
#define BASE_IPI_IRQ 26
/*
- * cpu_possible_map needs to be filled out prior to setup_per_cpu_areas
+ * cpu_possible_mask needs to be filled out prior to setup_per_cpu_areas
* (which is prior to any of our smp_prepare_cpu crap), in order to set
* up the... per_cpu areas.
*/
@@ -208,7 +208,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
stack_start = ((void *) thread) + THREAD_SIZE;
__vmstart(start_secondary, stack_start);
- while (!cpu_isset(cpu, cpu_online_map))
+ while (!cpu_online(cpu))
barrier();
return 0;
@@ -229,7 +229,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
/* Right now, let's just fake it. */
for (i = 0; i < max_cpus; i++)
- cpu_set(i, cpu_present_map);
+ set_cpu_present(i, true);
/* Also need to register the interrupts for IPI */
if (max_cpus > 1)
@@ -269,5 +269,5 @@ void smp_start_cpus(void)
int i;
for (i = 0; i < NR_CPUS; i++)
- cpu_set(i, cpu_possible_map);
+ set_cpu_possible(i, true);
}
diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
index ac795d311f4..6f38b6120d9 100644
--- a/arch/ia64/kernel/acpi.c
+++ b/arch/ia64/kernel/acpi.c
@@ -839,7 +839,7 @@ static __init int setup_additional_cpus(char *s)
early_param("additional_cpus", setup_additional_cpus);
/*
- * cpu_possible_map should be static, it cannot change as CPUs
+ * cpu_possible_mask should be static, it cannot change as CPUs
* are onlined, or offlined. The reason is per-cpu data-structures
* are allocated by some modules at init time, and dont expect to
* do this dynamically on cpu arrival/departure.
diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
index c3e2b85c3b0..97e7ce9b50e 100644
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -78,7 +78,7 @@ static inline void octeon_send_ipi_mask(const struct cpumask *mask,
}
/**
- * Detect available CPUs, populate cpu_possible_map
+ * Detect available CPUs, populate cpu_possible_mask
*/
static void octeon_smp_hotplug_setup(void)
{
@@ -268,7 +268,7 @@ static int octeon_cpu_disable(void)
spin_lock(&smp_reserve_lock);
- cpu_clear(cpu, cpu_online_map);
+ set_cpu_online(cpu, false);
cpu_clear(cpu, cpu_callin_map);
local_irq_disable();
fixup_irqs();
diff --git a/arch/mips/kernel/mips-mt-fpaff.c b/arch/mips/kernel/mips-mt-fpaff.c
index 802e6160f37..33f63bab478 100644
--- a/arch/mips/kernel/mips-mt-fpaff.c
+++ b/arch/mips/kernel/mips-mt-fpaff.c
@@ -173,7 +173,7 @@ asmlinkage long mipsmt_sys_sched_getaffinity(pid_t pid, unsigned int len,
if (retval)
goto out_unlock;
- cpus_and(mask, p->thread.user_cpus_allowed, cpu_possible_map);
+ cpumask_and(&mask, &p->thread.user_cpus_allowed, cpu_possible_mask);
out_unlock:
read_unlock(&tasklist_lock);
diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index e309665b6c8..f8b2c592514 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -25,7 +25,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
int i;
#ifdef CONFIG_SMP
- if (!cpu_isset(n, cpu_online_map))
+ if (!cpu_online(n))
return 0;
#endif
diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
index ca673569fd2..3046e298600 100644
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -317,7 +317,7 @@ static int bmips_cpu_disable(void)
pr_info("SMP: CPU%d is offline\n", cpu);
- cpu_clear(cpu, cpu_online_map);
+ set_cpu_online(cpu, false);
cpu_clear(cpu, cpu_callin_map);
local_flush_tlb_all();
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 9c1cce9de35..ba9376bf52a 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -148,7 +148,7 @@ static void stop_this_cpu(void *dummy)
/*
* Remove this CPU:
*/
- cpu_clear(smp_processor_id(), cpu_online_map);
+ set_cpu_online(smp_processor_id(), false);
for (;;) {
if (cpu_wait)
(*cpu_wait)(); /* Wait if available.
*/
@@ -174,7 +174,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
mp_ops->prepare_cpus(max_cpus);
set_cpu_sibling_map(0);
#ifndef CONFIG_HOTPLUG_CPU
- init_cpu_present(&cpu_possible_map);
+ init_cpu_present(cpu_possible_mask);
#endif
}
@@ -248,7 +248,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
while (!cpu_isset(cpu, cpu_callin_map))
udelay(100);
- cpu_set(cpu, cpu_online_map);
+ set_cpu_online(cpu, true);
return 0;
}
@@ -320,13 +320,12 @@ void flush_tlb_mm(struct mm_struct *mm)
if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
smp_on_other_tlbs(flush_tlb_mm_ipi, mm);
} else {
- cpumask_t mask = cpu_online_map;
unsigned int cpu;
- cpu_clear(smp_processor_id(), mask);
- for_each_cpu_mask(cpu, mask)
- if (cpu_context(cpu, mm))
+ for_each_online_cpu(cpu) {
+ if (cpu != smp_processor_id() && cpu_context(cpu, mm))
cpu_context(cpu, mm) = 0;
+ }
}
local_flush_tlb_mm(mm);
@@ -360,13 +359,12 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
smp_on_other_tlbs(flush_tlb_range_ipi, &fd);
} else {
- cpumask_t mask = cpu_online_map;
unsigned int cpu;
- cpu_clear(smp_processor_id(), mask);
- for_each_cpu_mask(cpu, mask)
- if (cpu_context(cpu, mm))
+ for_each_online_cpu(cpu) {
+ if (cpu != smp_processor_id() && cpu_context(cpu, mm))
cpu_context(cpu, mm) = 0;
+ }
}
local_flush_tlb_range(vma, start, end);
preempt_enable();
@@ -407,13 +405,12 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
smp_on_other_tlbs(flush_tlb_page_ipi, &fd);
} else {
- cpumask_t mask = cpu_online_map;
unsigned int cpu;
- cpu_clear(smp_processor_id(), mask);
- for_each_cpu_mask(cpu, mask)
- if (cpu_context(cpu, vma->vm_mm))
+ for_each_online_cpu(cpu) {
+ if (cpu != smp_processor_id() && cpu_context(cpu, vma->vm_mm))
cpu_context(cpu, vma->vm_mm) = 0;
+ }
}
local_flush_tlb_page(vma, page);
preempt_enable();
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
index c4f75bbc0bd..f5dd38f1d01 100644
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -291,7 +291,7 @@ static void smtc_configure_tlb(void)
* possibly leave some TCs/VPEs as "slave" processors.
*
* Use c0_MVPConf0 to find out how many TCs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
*/
int __init smtc_build_cpu_map(int start_cpu_slot)
diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c
index 1f9ca07f53c..47037ec5589 100644
--- a/arch/mips/mm/c-octeon.c
+++ b/arch/mips/mm/c-octeon.c
@@ -80,9 +80,9 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
if (vma)
mask = *mm_cpumask(vma->vm_mm);
else
- mask = cpu_online_map;
- cpu_clear(cpu, mask);
- for_each_cpu_mask(cpu, mask)
+ mask = *cpu_online_mask;
+ cpumask_clear_cpu(cpu, &mask);
+ for_each_cpu(cpu, &mask)
octeon_send_ipi_single(cpu, SMP_ICACHE_FLUSH);
preempt_enable();
diff --git a/arch/mips/netlogic/common/smp.c b/arch/mips/netlogic/common/smp.c
index db17f49886c..fab316de57e 100644
--- a/arch/mips/netlogic/common/smp.c
+++ b/arch/mips/netlogic/common/smp.c
@@ -165,7 +165,7 @@ void __init nlm_smp_setup(void)
cpu_set(boot_cpu, phys_cpu_present_map);
__cpu_number_map[boot_cpu] = 0;
__cpu_logical_map[0] = boot_cpu;
- cpu_set(0, cpu_possible_map);
+ set_cpu_possible(0, true);
num_cpus = 1;
for (i = 0; i < NR_CPUS; i++) {
@@ -177,14 +177,14 @@ void __init nlm_smp_setup(void)
cpu_set(i, phys_cpu_present_map);
__cpu_number_map[i] = num_cpus;
__cpu_logical_map[num_cpus] = i;
- cpu_set(num_cpus, cpu_possible_map);
+ set_cpu_possible(num_cpus, true);
++num_cpus;
}
}
pr_info("Phys CPU present map: %lx, possible map %lx\n",
(unsigned long)phys_cpu_present_map.bits[0],
- (unsigned long)cpu_possible_map.bits[0]);
+ (unsigned long)cpumask_bits(cpu_possible_mask)[0]);
pr_info("Detected %i Slave CPU(s)\n", num_cpus);
nlm_set_nmi_handler(nlm_boot_secondary_cpus);
diff --git a/arch/mips/pmc-sierra/yosemite/smp.c b/arch/mips/pmc-sierra/yosemite/smp.c
index 2608752898c..b71fae23104 100644
--- a/arch/mips/pmc-sierra/yosemite/smp.c
+++ b/arch/mips/pmc-sierra/yosemite/smp.c
@@ -146,7 +146,7 @@ static void __cpuinit yos_boot_secondary(int cpu, struct task_struct *idle)
}
/*
- * Detect available CPUs, populate cpu_possible_map before smp_init
+ * Detect available CPUs, populate cpu_possible_mask before smp_init
*
* We don't want to start the secondary CPU yet nor do we have a nice probing
* feature in PMON so we just assume presence of the secondary core.
@@ -155,10 +155,10 @@ static void __init yos_smp_setup(void)
{
int i;
- cpus_clear(cpu_possible_map);
+ init_cpu_possible(cpu_none_mask);
for (i = 0; i < 2; i++) {
- cpu_set(i, cpu_possible_map);
+ set_cpu_possible(i, true);
__cpu_number_map[i] = i;
__cpu_logical_map[i] = i;
}
@@ -169,7 +169,7 @@ static void __init yos_prepare_cpus(unsigned int max_cpus)
/*
* Be paranoid. Enable the IPI only if we're really about to go SMP.
*/
- if (cpus_weight(cpu_possible_map))
+ if (num_possible_cpus())
set_c0_status(STATUSF_IP5);
}
diff --git a/arch/mips/sgi-ip27/ip27-smp.c b/arch/mips/sgi-ip27/ip27-smp.c
index c6851df9ab7..735b43bf8f8 100644
--- a/arch/mips/sgi-ip27/ip27-smp.c
+++ b/arch/mips/sgi-ip27/ip27-smp.c
@@ -76,7 +76,7 @@ static int do_cpumask(cnodeid_t cnode, nasid_t nasid, int highest)
/* Only let it join in if it's marked enabled */
if ((acpu->cpu_info.flags & KLINFO_ENABLE) &&
(tot_cpus_found != NR_CPUS)) {
- cpu_set(cpuid, cpu_possible_map);
+ set_cpu_possible(cpuid, true);
alloc_cpupda(cpuid, tot_cpus_found);
cpus_found++;
tot_cpus_found++;
diff --git a/arch/mips/sibyte/bcm1480/smp.c b/arch/mips/sibyte/bcm1480/smp.c
index d667875be56..de88e22694a 100644
--- a/arch/mips/sibyte/bcm1480/smp.c
+++ b/arch/mips/sibyte/bcm1480/smp.c
@@ -138,7 +138,7 @@ static void __cpuinit bcm1480_boot_secondary(int cpu, struct task_struct *idle)
/*
* Use CFE to find out how many CPUs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
* XXXKW will the boot CPU ever not be physical 0?
*
* Common setup before any secondaries are started
@@ -147,14 +147,13 @@ static void __init bcm1480_smp_setup(void)
{
int i, num;
- cpus_clear(cpu_possible_map);
- cpu_set(0, cpu_possible_map);
+ init_cpu_possible(cpumask_of(0));
__cpu_number_map[0] = 0;
__cpu_logical_map[0] = 0;
for (i = 1, num = 0; i < NR_CPUS; i++) {
if (cfe_cpu_stop(i) == 0) {
- cpu_set(i, cpu_possible_map);
+ set_cpu_possible(i, true);
__cpu_number_map[i] = ++num;
__cpu_logical_map[num] = i;
}
diff --git a/arch/mips/sibyte/sb1250/smp.c b/arch/mips/sibyte/sb1250/smp.c
index 38e7f6bd792..285cfef4ebc 100644
--- a/arch/mips/sibyte/sb1250/smp.c
+++ b/arch/mips/sibyte/sb1250/smp.c
@@ -126,7 +126,7 @@ static void __cpuinit sb1250_boot_secondary(int cpu, struct task_struct *idle)
/*
* Use CFE to find out how many CPUs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
* XXXKW will the boot CPU ever not be physical 0?
*
* Common setup before any secondaries are started
@@ -135,14 +135,13 @@ static void __init sb1250_smp_setup(void)
{
int i, num;
- cpus_clear(cpu_possible_map);
- cpu_set(0, cpu_possible_map);
+ init_cpu_possible(cpumask_of(0));
__cpu_number_map[0] = 0;
__cpu_logical_map[0] = 0;
for (i = 1, num = 0; i < NR_CPUS; i++) {
if (cfe_cpu_stop(i) == 0) {
- cpu_set(i, cpu_possible_map);
+ set_cpu_possible(i, true);
__cpu_number_map[i] = ++num;
__cpu_logical_map[num] = i;
}
diff --git a/arch/sparc/kernel/leon_kernel.c b/arch/sparc/kernel/leon_kernel.c
index a19c8a06368..35e43673c45 100644
--- a/arch/sparc/kernel/leon_kernel.c
+++ b/arch/sparc/kernel/leon_kernel.c
@@ -104,11 +104,11 @@ static int irq_choose_cpu(const struct cpumask *affinity)
{
cpumask_t mask;
- cpus_and(mask, cpu_online_map, *affinity);
- if (cpus_equal(mask, cpu_online_map) || cpus_empty(mask))
+ cpumask_and(&mask, cpu_online_mask, affinity);
+ if (cpumask_equal(&mask, cpu_online_mask) || cpumask_empty(&mask))
return boot_cpu_id;
else
- return first_cpu(mask);
+ return cpumask_first(&mask);
}
#else
#define irq_choose_cpu(affinity) boot_cpu_id
diff --git a/arch/tile/kernel/setup.c b/arch/tile/kernel/setup.c
index 5f85d8b34db..92a94f4920a 100644
--- a/arch/tile/kernel/setup.c
+++ b/arch/tile/kernel/setup.c
@@ -1100,7 +1100,7 @@ EXPORT_SYMBOL(hash_for_home_map);
/*
* cpu_cacheable_map lists all the cpus whose caches the hypervisor can
- * flush on our behalf. It is set to cpu_possible_map OR'ed with
+ * flush on our behalf. It is set to cpu_possible_mask OR'ed with
* hash_for_home_map, and it is what should be passed to
* hv_flush_remote() to flush all caches. Note that if there are
* dedicated hypervisor driver tiles that have authorized use of their
@@ -1186,7 +1186,7 @@ static void __init setup_cpu_maps(void)
sizeof(cpu_lotar_map));
if (rc < 0) {
pr_err("warning: no HV_INQ_TILES_LOTAR; using AVAIL\n");
- cpu_lotar_map = cpu_possible_map;
+ cpu_lotar_map = *cpu_possible_mask;
}
#if CHIP_HAS_CBOX_HOME_MAP()
@@ -1196,9 +1196,9 @@ static void __init setup_cpu_maps(void)
sizeof(hash_for_home_map));
if (rc < 0)
early_panic("hv_inquire_tiles(HFH_CACHE) failed: rc %d\n", rc);
- cpumask_or(&cpu_cacheable_map, &cpu_possible_map, &hash_for_home_map);
+ cpumask_or(&cpu_cacheable_map, cpu_possible_mask, &hash_for_home_map);
#else
- cpu_cacheable_map = cpu_possible_map;
+ cpu_cacheable_map = *cpu_possible_mask;
#endif
}
diff --git a/arch/um/kernel/skas/process.c b/arch/um/kernel/skas/process.c
index 2e9852c0d48..0a9e57e7446 100644
--- a/arch/um/kernel/skas/process.c
+++ b/arch/um/kernel/skas/process.c
@@ -41,7 +41,7 @@ static int __init start_kernel_proc(void *unused)
cpu_tasks[0].pid = pid;
cpu_tasks[0].task = current;
#ifdef CONFIG_SMP
- cpu_online_map = cpumask_of_cpu(0);
+ init_cpu_online(get_cpu_mask(0));
#endif
start_kernel();
return 0;
}
diff --git a/arch/um/kernel/smp.c b/arch/um/kernel/smp.c
index 155206a6690..6f588e160fb 100644
--- a/arch/um/kernel/smp.c
+++ b/arch/um/kernel/smp.c
@@ -76,7 +76,7 @@ static int idle_proc(void *cpup)
cpu_relax();
notify_cpu_starting(cpu);
- cpu_set(cpu, cpu_online_map);
+ set_cpu_online(cpu, true);
default_idle();
return 0;
}
@@ -110,8 +110,7 @@ void smp_prepare_cpus(unsigned int maxcpus)
for (i = 0; i < ncpus; ++i)
set_cpu_possible(i, true);
- cpu_clear(me, cpu_online_map);
- cpu_set(me, cpu_online_map);
+ set_cpu_online(me, true);
cpu_set(me, cpu_callin_map);
err = os_pipe(cpu_data[me].ipi_pipe, 1, 1);
@@ -138,13 +137,13 @@ void smp_prepare_cpus(unsigned int maxcpus)
void smp_prepare_boot_cpu(void)
{
- cpu_set(smp_processor_id(), cpu_online_map);
+ set_cpu_online(smp_processor_id(), true);
}
int __cpu_up(unsigned int cpu)
{
cpu_set(cpu, smp_commenced_mask);
- while (!cpu_isset(cpu, cpu_online_map))
+ while (!cpu_online(cpu))
mb();
return 0;
}
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index b132ade26f7..4f51bebac02 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -967,7 +967,7 @@ void xen_setup_shared_info(void)
xen_setup_mfn_list_list();
}
-/* This is called once we have the cpu_possible_map */
+/* This is called once we have the cpu_possible_mask */
void xen_setup_vcpu_info_placement(void)
{
int cpu;
diff --git a/drivers/cpufreq/db8500-cpufreq.c b/drivers/cpufreq/db8500-cpufreq.c
index a22ffa5bff9..0bf1b8910ee 100644
--- a/drivers/cpufreq/db8500-cpufreq.c
+++ b/drivers/cpufreq/db8500-cpufreq.c
@@ -142,7 +142,7 @@ static int __cpuinit db8500_cpufreq_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = 20 * 1000; /* in ns */
/* policy sharing between dual CPUs */
- cpumask_copy(policy->cpus, &cpu_present_map);
+ cpumask_copy(policy->cpus, cpu_present_mask);
policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 1ffdb9856bb..a2c819d3c96 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -764,12 +764,6 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
*
*/
#ifndef CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
-/* These strip const, as traditionally they weren't const. */
-#define cpu_possible_map (*(cpumask_t *)cpu_possible_mask)
-#define cpu_online_map (*(cpumask_t *)cpu_online_mask)
-#define cpu_present_map (*(cpumask_t *)cpu_present_mask)
-#define cpu_active_map (*(cpumask_t *)cpu_active_mask)
-
#define cpumask_of_cpu(cpu) (*get_cpu_mask(cpu))
#define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(NR_CPUS)
diff --git a/init/Kconfig b/init/Kconfig
index 72f33faca44..6cfd71d0646 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1414,8 +1414,8 @@ endif # MODULES
config INIT_ALL_POSSIBLE
bool
help
- Back when each arch used to define their own cpu_online_map and
- cpu_possible_map, some of them chose to initialize cpu_possible_map
+ Back when each arch used to define their own cpu_online_mask and
+ cpu_possible_mask, some of them chose to initialize cpu_possible_mask
with all 1s, and others with all 0s. When they were centralised,
it was better to provide this option than to break all the archs
and have several arch maintainers pursuing me down dark alleys.
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index b96ad75b7e6..14f7070b4ba 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -270,11 +270,11 @@ static struct file_system_type cpuset_fs_type = {
* are online. If none are online, walk up the cpuset hierarchy
* until we find one that does have some online cpus. If we get
* all the way to the top and still haven't found any online cpus,
- * return cpu_online_map. Or if passed a NULL cs from an exit'ing
- * task, return cpu_online_map.
+ * return cpu_online_mask. Or if passed a NULL cs from an exit'ing
+ * task, return cpu_online_mask.
*
* One way or another, we guarantee to return some non-empty subset
- * of cpu_online_map.
+ * of cpu_online_mask.
*
* Call with callback_mutex held.
*/
@@ -867,7 +867,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
int retval;
int is_load_balanced;
- /* top_cpuset.cpus_allowed tracks cpu_online_map; it's read-only */
+ /* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
if (cs == &top_cpuset)
return -EACCES;
@@ -2149,7 +2149,7 @@ void __init cpuset_init_smp(void)
*
* Description: Returns the cpumask_var_t cpus_allowed of the cpuset
* attached to the specified @tsk. Guaranteed to return some non-empty
- * subset of cpu_online_map, even if this means going outside the
+ * subset of cpu_online_mask, even if this means going outside the
* tasks cpuset.
**/
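The get_online_cpus()/put_online_cpus() pairing retained by the Documentation/cpu-hotplug.txt hunk is the other half of the story: holding the hotplug reference keeps cpu_online_mask stable while it is being walked. A minimal sketch of that usage (illustrative only, not from the patch):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>
	#include <linux/kernel.h>

	/* Illustrative only: hold off CPU hotplug so cpu_online_mask cannot
	 * change while the currently online CPUs are walked. */
	static void example_walk_online_cpus(void)
	{
		unsigned int cpu;

		get_online_cpus();
		for_each_online_cpu(cpu)
			pr_info("cpu%u is online\n", cpu);
		put_online_cpus();
	}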