2019-04-10  fou: correct spelling of encapsulation  (Simon Horman)
Correct spelling of encapsulation. Found by inspection. Signed-off-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  Merge branch 'net-sched-taprio-fix-picos_per_byte-miscalculation'  (David S. Miller)
Leandro Dorileo says: ==================== net/sched: taprio: fix picos_per_byte miscalculation This set fixes miscalculations based on invalid link speed values. Changes in v6: + Avoid locking a spinlock while calling __ethtool_get_link_ksettings() (suggested by: Cong Wang); Changes in v5: + Don't iterate over the whole maintained net_device list (suggested by: Florian Fainelli); Changes in v4: + converted pr_info calls to netdev_dbg (suggested by: Florian Fainelli); Changes in v3: + fixed remaining pr_info() format warnings; Changes in v2: + fixed pr_info() format both on cbs and taprio patches; ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  net/sched: cbs: fix port_rate miscalculation  (Leandro Dorileo)
The Credit Based Shaper heavily depends on the link speed to calculate the scheduling credits; we can't properly calculate the credits if the device has failed to report the link speed. In that case we can't dequeue packets assuming a wrong port rate that would result in an inconsistent credit distribution. This patch makes sure we fail to dequeue in case: 1) __ethtool_get_link_ksettings() reports an error, or 2) the ethernet driver failed to set the ksettings' speed value (setting the link speed to SPEED_UNKNOWN). Additionally, we properly recalculate the port rate whenever the link speed changes. Fixes: 3d0bd028ffb4a ("net/sched: Add support for HW offloading for CBS") Signed-off-by: Leandro Dorileo <leandro.maciel.dorileo@intel.com> Reviewed-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
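For illustration, a minimal sketch of the intended behaviour; the helper name and the Mbit/s-to-bytes conversion are illustrative, not the exact driver code, and __ethtool_get_link_ksettings() must be called under RTNL:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static int cbs_recalc_port_rate(struct net_device *dev, s64 *port_rate)
    {
        struct ethtool_link_ksettings ecmd;

        /* refuse to compute a rate when the speed is not reported */
        if (__ethtool_get_link_ksettings(dev, &ecmd) < 0 ||
            ecmd.base.speed == SPEED_UNKNOWN)
            return -EINVAL;   /* caller must not dequeue */

        /* speed is reported in Mbit/s; convert to bytes per second */
        *port_rate = (s64)ecmd.base.speed * 1000 * 1000 / 8;
        return 0;
    }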
2019-04-10net/sched: taprio: fix picos_per_byte miscalculationLeandro Dorileo
The Time Aware Priority Scheduler is heavily dependent to link speed, it relies on it to calculate transmission bytes per cycle, we can't properly calculate the so called budget if the device has failed to report the link speed. In that case we can't dequeue packets assuming a wrong budget. This patch makes sure we fail to dequeue case: 1) __ethtool_get_link_ksettings() reports error or 2) the ethernet driver failed to set the ksettings' speed value (setting link speed to SPEED_UNKNOWN). Additionally we re calculate the budget whenever the link speed is changed. Fixes: 5a781ccbd19e4 ("tc: Add support for configuring the taprio scheduler") Signed-off-by: Leandro Dorileo <leandro.maciel.dorileo@intel.com> Reviewed-by: Vedang Patel <vedang.patel@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
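A minimal sketch of the picos-per-byte side of this, under the same "unknown speed" constraint; the helper name is hypothetical and the real qdisc keeps the value in its private state:

    #include <linux/math64.h>
    #include <linux/time64.h>

    static s64 taprio_picos_per_byte(int speed_mbps)
    {
        if (speed_mbps <= 0)    /* covers SPEED_UNKNOWN (-1) */
            return -1;
        /* 10^12 ps per second divided by (speed * 10^6 / 8) bytes per second */
        return div64_s64(8 * PSEC_PER_SEC, (s64)speed_mbps * 1000 * 1000);
    }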
2019-04-10  net: strparser: fix comment  (Jakub Kicinski)
Fix comment. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  ipv4: Handle RTA_GATEWAY set to 0  (David Ahern)
Govindarajulu reported a regression with Network Manager which sends an RTA_GATEWAY attribute with the address set to 0. Fix up the handling of RTA_GATEWAY to only set fc_gw_family if the gateway address is actually set. Fixes: f35b794b3b405 ("ipv4: Prepare fib_config for IPv6 gateway") Reported-by: Govindarajulu Varadarajan <govind.varadar@gmail.com> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
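A sketch of the resulting handling; the field names follow the commit text, the wrapper helper is hypothetical:

    #include <net/ip_fib.h>
    #include <net/netlink.h>

    static void fib_cfg_set_gw4(struct fib_config *cfg, const struct nlattr *attr)
    {
        cfg->fc_gw4 = nla_get_in_addr(attr);   /* RTA_GATEWAY payload */
        if (cfg->fc_gw4)                       /* 0 means "no gateway set" */
            cfg->fc_gw_family = AF_INET;
    }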
2019-04-10  Merge branch 'net-sched-move-back-qlen-to-per-CPU-accounting'  (David S. Miller)
Paolo Abeni says: ==================== net: sched: move back qlen to per CPU accounting The commit 46b1c18f9deb ("net: sched: put back q.qlen into a single location") introduced a measurable regression in contended scenarios for locked qdiscs. As Eric suggested, we could replace q.qlen access with calls to qdisc_is_empty() in the datapath and revert the above commit. The TC subsystem updates qdisc->is_empty in a somewhat loose way: notably 'is_empty' is set only when the qdisc dequeue() calls return a NULL ptr, that is, on the invocation after the last packet is dequeued. The above is good enough for the BYPASS implementation - the only downside is that we end up skipping the optimization for a very small time-frame - but things will break hard when the internal consistency of a classful qdisc relies on a child's qdisc_is_empty(). A stricter 'is_empty' update adds relevant complexity to its life-cycle, so this series takes a different approach: we allow a lockless qdisc to switch from per-CPU accounting to global stats accounting when the NOLOCK bit is cleared. Since most pieces of infrastructure are already in place, this requires very few changes to the pfifo_fast qdisc, and any later NOLOCK qdisc can hook there with little effort - no need to maintain two different implementations. The first 2 patches remove direct qlen access from non-core TC code, the 3rd and 4th patches add and use the infrastructure to allow stats accounting switching, and the 5th patch is the actual revert. v1 -> v2: - fixed build issues - more descriptive commit message for patch 5/5 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  Revert: "net: sched: put back q.qlen into a single location"  (Paolo Abeni)
This reverts commit 46b1c18f9deb ("net: sched: put back q.qlen into a single location"). After the previous patch, when a NOLOCK qdisc is enslaved to a locking qdisc it switches to global stats accounting. As a consequence, when a classful qdisc directly accesses a child qdisc's qlen, such a qdisc is not doing per-CPU accounting and the qlen value is consistent. In the control path nobody uses qlen directly since commit e5f0e8f8e45 ("net: sched: introduce and use qdisc tree flush/purge helpers"), so we can remove the contended atomic ops from the datapath. v1 -> v2: - complete the qdisc_qstats_atomic_qlen_dec() -> qdisc_qstats_cpu_qlen_dec() replacement, fix build issue - more descriptive commit message Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  net: sched: when clearing NOLOCK, clear TCQ_F_CPUSTATS, too  (Paolo Abeni)
Since stats updating is always consistent with the TCQ_F_CPUSTATS flag, we can disable it at qdisc creation time by flipping that bit. In my experiments, if the NOLOCK flag is cleared, per-CPU stats accounting does not give any measurable performance gain, but it wastes some memory. Let's clear TCQ_F_CPUSTATS together with NOLOCK when enslaving a NOLOCK qdisc to a 'lock' one. Use the stats update helper inside pfifo_fast to cope correctly with the TCQ_F_CPUSTATS flag change. As a side effect, the q.qlen value of any child qdisc is always consistent for all locked classful qdiscs. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  net: sched: always do stats accounting according to TCQ_F_CPUSTATS  (Paolo Abeni)
The core sched implementation checks independently for the NOLOCK flag, to acquire/release the root spin lock, and for qdisc_is_percpu_stats(), to account per-CPU values, in many places. This change updates the last few places checking TCQ_F_NOLOCK so that they do per-CPU stats accounting according to the qdisc_is_percpu_stats() value. The above allows cleaning up the dev_requeue_skb() implementation a bit and makes stats updates always consistent with a single flag. v1 -> v2: - do not move qdisc_is_empty definition, fix build issue Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
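The pattern the series converges on looks roughly like this; a sketch with an illustrative wrapper name, valid once the final revert restores the per-CPU qlen helpers:

    #include <net/sch_generic.h>

    static void qdisc_account_enqueue(struct Qdisc *q, struct sk_buff *skb)
    {
        if (qdisc_is_percpu_stats(q)) {
            qdisc_qstats_cpu_backlog_inc(q, skb);
            qdisc_qstats_cpu_qlen_inc(q);
        } else {
            qdisc_qstats_backlog_inc(q, skb);
            q->q.qlen++;
        }
    }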
2019-04-10  net: sched: prefer qdisc_is_empty() over direct qlen access  (Paolo Abeni)
When checking the root qdisc queue length, do not access q.qlen directly; prefer the qdisc_is_empty() helper instead. In the following patches we will move qlen accounting back to per-CPU values for NOLOCK qdiscs. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
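For reference, a sketch of what the helper hides from callers (see include/net/sch_generic.h for the real definition):

    static inline bool qdisc_is_empty(const struct Qdisc *qdisc)
    {
        if (qdisc_is_percpu_stats(qdisc))
            return qdisc->empty;   /* maintained by the NOLOCK datapath */
        return !qdisc->q.qlen;     /* locked qdiscs keep a single qlen  */
    }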
2019-04-10  net: caif: avoid using qdisc_qlen()  (Paolo Abeni)
That helper does not cope correctly with NOLOCK qdiscs. In the following patches we will move qlen back to per-CPU values for such qdiscs, so qdisc_qlen_sum() is not an option either. Instead, use qlen only for locked qdiscs, and always set flow off for NOLOCK qdiscs with a non-empty tx queue. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-10  libbpf: fix crash in XDP socket part with new larger BPF_LOG_BUF_SIZE  (Magnus Karlsson)
In commit da11b417583e ("libbpf: teach libbpf about log_level bit 2"), the BPF_LOG_BUF_SIZE was increased to 16M. The XDP socket part of libbpf allocated the log_buf on the stack, but for the new 16M buffer size this is not going to work. Change the code so it uses a 16K buffer instead. Fixes: da11b417583e ("libbpf: teach libbpf about log_level bit 2") Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
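A sketch of the shape of the fix; the macro and function names here are illustrative, not the exact libbpf code:

    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    #define XSK_PROG_LOG_BUF_SIZE 16384             /* 16K fits on the stack */

    static int xsk_load_prog(const struct bpf_insn *insns, size_t insns_cnt)
    {
        char log_buf[XSK_PROG_LOG_BUF_SIZE];        /* was BPF_LOG_BUF_SIZE (16M) */

        return bpf_load_program(BPF_PROG_TYPE_XDP, insns, insns_cnt,
                                "LGPL-2.1 or BSD-2-Clause", 0,
                                log_buf, sizeof(log_buf));
    }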
2019-04-10  bpf, bpftool: fix a few ubsan warnings  (Yonghong Song)
The issue is reported at https://github.com/libbpf/libbpf/issues/28. Basically, per the C standard, for void *memcpy(void *dest, const void *src, size_t n), if "dest" or "src" is NULL the result of memcpy is undefined, regardless of whether "n" is 0 or not. clang's ubsan reported three such instances in bpf.c with the following pattern: memcpy(dest, 0, 0). Although in practice no known compiler will cause issues when the copy size is 0, let us still fix the issue to silence the ubsan warnings. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
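The fix boils down to guarding the zero-length case, e.g. (sketch):

    #include <string.h>

    /* skip memcpy() entirely for n == 0 so a NULL src/dest never reaches it */
    static inline void memcpy_nonzero(void *dest, const void *src, size_t n)
    {
        if (n)
            memcpy(dest, src, n);
    }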
2019-04-09  Merge branch 'support-global-data'  (Alexei Starovoitov)
Daniel Borkmann says: ==================== This series is a major rework of previously submitted libbpf patches [0] in order to add global data support for BPF. The kernel has been extended to add proper infrastructure that allows for full .bss/.data/.rodata sections on the BPF loader side, based upon feedback from the LPC discussions [1]. The latter support is then also added to libbpf in this series, which allows for more natural, C-like programming of BPF programs. For more information on the loader, please refer to the 'bpf, libbpf: support global data/bss/rodata sections' patch in this series. Thanks a lot! v5 -> v6: - Removed synchronize_rcu() from map freeze (Jann) - Rest as-is v4 -> v5: - Removed index selection again for ldimm64 (Alexei) - Adapted related test cases and added new ones to test rejection of off != 0 v3 -> v4: - Various fixes in BTF verification e.g. to disallow Var and DataSec to be an intermediate type during resolve (Martin) - More BTF test cases added - Few cleanups in key-less BTF commit (Martin) - Bump libbpf minor version from 2 to 3 - Renamed and simplified read-only locking - Various minor improvements all over the place v2 -> v3: - Implement BTF support in kernel, libbpf, bpftool, add tests - Fix idx + off conversion (Andrii) - Document lower / higher bits for direct value access (Andrii) - Add tests with small value size (Andrii) - Add index selection into ldimm64 (Andrii) - Fix missing fdput() (Jann) - Reject invalid flags in BPF_F_*_PROG (Jakub) - Complete rework of libbpf support, includes: - Add objname to map name (Stanislav) - Make .rodata map full read-only after setup (Andrii) - Merge relocation handling into single one (Andrii) - Store global maps into obj->maps array (Andrii, Alexei) - Debug message when skipping section (Andrii) - Reject non-static global data till we have semantics for sharing them (Yonghong, Andrii, Alexei) - More test cases and completely reworked prog test (Alexei) - Fixes, cleanups, etc all over the set - Not yet addressed: - Make BTF mandatory for these maps (Alexei) -> Waiting till BTF support for these lands first v1 -> v2: - Instead of 32-bit static data, implement full global data support (Alexei) [0] https://patchwork.ozlabs.org/cover/1040290/ [1] http://vger.kernel.org/lpc-bpf2018.html#session-3 ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf, selftest: add test cases for BTF Var and DataSec  (Daniel Borkmann)
Extend test_btf with various positive and negative tests around BTF verification of kind Var and DataSec. All passing as well: # ./test_btf [...] BTF raw test[4] (global data test #1): OK BTF raw test[5] (global data test #2): OK BTF raw test[6] (global data test #3): OK BTF raw test[7] (global data test #4, unsupported linkage): OK BTF raw test[8] (global data test #5, invalid var type): OK BTF raw test[9] (global data test #6, invalid var type (fwd type)): OK BTF raw test[10] (global data test #7, invalid var type (fwd type)): OK BTF raw test[11] (global data test #8, invalid var size): OK BTF raw test[12] (global data test #9, invalid var size): OK BTF raw test[13] (global data test #10, invalid var size): OK BTF raw test[14] (global data test #11, multiple section members): OK BTF raw test[15] (global data test #12, invalid offset): OK BTF raw test[16] (global data test #13, invalid offset): OK BTF raw test[17] (global data test #14, invalid offset): OK BTF raw test[18] (global data test #15, not var kind): OK BTF raw test[19] (global data test #16, invalid var referencing sec): OK BTF raw test[20] (global data test #17, invalid var referencing var): OK BTF raw test[21] (global data test #18, invalid var loop): OK BTF raw test[22] (global data test #19, invalid var referencing var): OK BTF raw test[23] (global data test #20, invalid ptr referencing var): OK BTF raw test[24] (global data test #21, var included in struct): OK BTF raw test[25] (global data test #22, array of var): OK [...] PASS:167 SKIP:0 FAIL:0 Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf, selftest: test global data/bss/rodata sections  (Joe Stringer)
Add tests for libbpf relocation of static variable references into the .data, .rodata and .bss sections of the ELF, also add read-only test for .rodata. All passing: # ./test_progs [...] test_global_data:PASS:load program 0 nsec test_global_data:PASS:pass global data run 925 nsec test_global_data_number:PASS:relocate .bss reference 925 nsec test_global_data_number:PASS:relocate .data reference 925 nsec test_global_data_number:PASS:relocate .rodata reference 925 nsec test_global_data_number:PASS:relocate .bss reference 925 nsec test_global_data_number:PASS:relocate .data reference 925 nsec test_global_data_number:PASS:relocate .rodata reference 925 nsec test_global_data_number:PASS:relocate .bss reference 925 nsec test_global_data_number:PASS:relocate .bss reference 925 nsec test_global_data_number:PASS:relocate .rodata reference 925 nsec test_global_data_number:PASS:relocate .rodata reference 925 nsec test_global_data_number:PASS:relocate .rodata reference 925 nsec test_global_data_string:PASS:relocate .rodata reference 925 nsec test_global_data_string:PASS:relocate .data reference 925 nsec test_global_data_string:PASS:relocate .bss reference 925 nsec test_global_data_string:PASS:relocate .data reference 925 nsec test_global_data_string:PASS:relocate .bss reference 925 nsec test_global_data_struct:PASS:relocate .rodata reference 925 nsec test_global_data_struct:PASS:relocate .bss reference 925 nsec test_global_data_struct:PASS:relocate .rodata reference 925 nsec test_global_data_struct:PASS:relocate .data reference 925 nsec test_global_data_rdonly:PASS:test .rodata read-only map 925 nsec [...] Summary: 229 PASSED, 0 FAILED Note map helper signatures have been changed to avoid warnings when passing in const data. Joint work with Daniel Borkmann. Signed-off-by: Joe Stringer <joe@wand.net.nz> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf, selftest: test {rd, wr}only flags and direct value access  (Daniel Borkmann)
Extend test_verifier with various test cases around the two kernel extensions, that is, {rd,wr}only map support as well as direct map value access. All passing, one skipped due to xskmap not present on test machine: # ./test_verifier [...] #948/p XDP pkt read, pkt_meta' <= pkt_data, bad access 1 OK #949/p XDP pkt read, pkt_meta' <= pkt_data, bad access 2 OK #950/p XDP pkt read, pkt_data <= pkt_meta', good access OK #951/p XDP pkt read, pkt_data <= pkt_meta', bad access 1 OK #952/p XDP pkt read, pkt_data <= pkt_meta', bad access 2 OK Summary: 1410 PASSED, 1 SKIPPED, 0 FAILED Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: bpftool support for dumping data/bss/rodata sections  (Daniel Borkmann)
Add the ability to bpftool to handle BTF Var and DataSec kinds in order to dump them out of btf_dumper_type(). The value has a single object with the section name, which itself holds an array of variables it dumps. A single variable is an object by itself printed along with its name. From there further type information is dumped along with corresponding value information. Example output from .rodata: # ./bpftool m d i 150 [{ "value": { ".rodata": [{ "load_static_data.bar": 18446744073709551615 },{ "num2": 24 },{ "num5": 43947 },{ "num6": 171 },{ "str0": [97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,0,0,0,0,0,0 ] },{ "struct0": { "a": 42, "b": 4278120431, "c": 1229782938247303441 } },{ "struct2": { "a": 0, "b": 0, "c": 0 } } ] } } ] Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf, libbpf: add support for BTF Var and DataSec  (Daniel Borkmann)
This adds libbpf support for the BTF Var and DataSec kinds. The main point here is that libbpf needs to do some preparatory work before the whole BTF object can be loaded into the kernel, that is, fixing up the DataSec size, taken from the ELF section size, and the non-static variable offset, which needs to be taken from the ELF's string section. Upstream LLVM doesn't fix these up since, at the time of BTF emission, it is too early in the compilation process and this information isn't available yet; hence the loader needs to take care of it. Note that deduplication handling has not been in the scope of this work and needs to be addressed in a future commit. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://reviews.llvm.org/D59441 Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf, libbpf: support global data/bss/rodata sections  (Daniel Borkmann)
This work adds BPF loader support for global data sections to libbpf. This allows writing BPF programs in a more natural, C-like way by being able to define global variables and const data. Back at LPC 2018 [0] we presented a first prototype which implemented support for global data sections by extending the BPF syscall, where union bpf_attr would get an additional memory/size pair for each section passed during prog load, in order to later add this base address into the ldimm64 instruction along with the user-provided offset when accessing a variable. The consensus from LPC was that for proper upstream support, it would be more desirable to use maps instead of a bpf_attr extension, as this would allow for introspection of these sections as well as potential live updates of their content. This work follows this path by taking the following steps from the loader side: 1) In the bpf_object__elf_collect() step we pick up ".data", ".rodata", and ".bss" section information. 2) If present, in bpf_object__init_internal_map() we add maps to the obj's map array corresponding to each of the present sections. Given that section size and access properties can differ, a single-entry array map is created with a value size corresponding to the ELF section size of .data, .bss or .rodata. These internal maps are integrated into the normal map handling of libbpf such that when the user traverses all obj maps, they can be differentiated from user-created ones via bpf_map__is_internal(). In later steps, when we actually create these maps in the kernel via bpf_object__create_maps(), the content of the .data and .rodata sections is copied into the map through bpf_map_update_elem(). For .bss this is not necessary since the array map is already zero-initialized by default. Additionally, for .rodata the map is frozen as read-only after setup, such that writes are possible neither from the program nor from the syscall side. 3) In the bpf_program__collect_reloc() step, we record the corresponding map, insn index, and relocation type for the global data. 4) Last but not least, in the actual relocation step in bpf_program__relocate(), we mark the ldimm64 instruction with src_reg = BPF_PSEUDO_MAP_VALUE, where the map's file descriptor is stored in the first imm field, similarly to BPF_PSEUDO_MAP_FD, and the access offset into the section is stored in the second imm field (as ldimm64 is 2-insn wide). Given these maps have only a single element, ldimm64's off remains zero in both parts. 5) On the kernel side, this specially marked BPF_PSEUDO_MAP_VALUE load will then store the actual target address, that is, the map value base address + offset, in order to have 'map-lookup'-free access. The destination register in the verifier will then be marked as PTR_TO_MAP_VALUE, containing the fixed offset as reg->off and the backing BPF map as reg->map_ptr. Meaning, it's treated like any other normal map value from the verification side, only with efficient, direct value access instead of an actual call to the map lookup helper as in the typical case. Currently, only support for static global variables has been added, and libbpf rejects non-static global variables from loading. This can be lifted once we have proper semantics for how BPF will treat multi-object BPF loads. From the BTF side, libbpf will set the value type id of the types corresponding to the ".bss", ".data" and ".rodata" names, which LLVM will emit without the object name prefix. The key type will be left as zero, thus making use of the key-less BTF option in array maps.
Simple example dump of program using globals vars in each section: # bpftool prog [...] 6784: sched_cls name load_static_dat tag a7e1291567277844 gpl loaded_at 2019-03-11T15:39:34+0000 uid 0 xlated 1776B jited 993B memlock 4096B map_ids 2238,2237,2235,2236,2239,2240 # bpftool map show id 2237 2237: array name test_glo.bss flags 0x0 key 4B value 64B max_entries 1 memlock 4096B # bpftool map show id 2235 2235: array name test_glo.data flags 0x0 key 4B value 64B max_entries 1 memlock 4096B # bpftool map show id 2236 2236: array name test_glo.rodata flags 0x80 key 4B value 96B max_entries 1 memlock 4096B # bpftool prog dump xlated id 6784 int load_static_data(struct __sk_buff * skb): ; int load_static_data(struct __sk_buff *skb) 0: (b7) r6 = 0 ; test_reloc(number, 0, &num0); 1: (63) *(u32 *)(r10 -4) = r6 2: (bf) r2 = r10 ; int load_static_data(struct __sk_buff *skb) 3: (07) r2 += -4 ; test_reloc(number, 0, &num0); 4: (18) r1 = map[id:2238] 6: (18) r3 = map[id:2237][0]+0 <-- direct addr in .bss area 8: (b7) r4 = 0 9: (85) call array_map_update_elem#100464 10: (b7) r1 = 1 ; test_reloc(number, 1, &num1); [...] ; test_reloc(string, 2, str2); 120: (18) r8 = map[id:2237][0]+16 <-- same here at offset +16 122: (18) r1 = map[id:2239] 124: (18) r3 = map[id:2237][0]+16 126: (b7) r4 = 0 127: (85) call array_map_update_elem#100464 128: (b7) r1 = 120 ; str1[5] = 'x'; 129: (73) *(u8 *)(r9 +5) = r1 ; test_reloc(string, 3, str1); 130: (b7) r1 = 3 131: (63) *(u32 *)(r10 -4) = r1 132: (b7) r9 = 3 133: (bf) r2 = r10 ; int load_static_data(struct __sk_buff *skb) 134: (07) r2 += -4 ; test_reloc(string, 3, str1); 135: (18) r1 = map[id:2239] 137: (18) r3 = map[id:2235][0]+16 <-- direct addr in .data area 139: (b7) r4 = 0 140: (85) call array_map_update_elem#100464 141: (b7) r1 = 111 ; __builtin_memcpy(&str2[2], "hello", sizeof("hello")); 142: (73) *(u8 *)(r8 +6) = r1 <-- further access based on .bss data 143: (b7) r1 = 108 144: (73) *(u8 *)(r8 +5) = r1 [...] For Cilium use-case in particular, this enables migrating configuration constants from Cilium daemon's generated header defines into global data sections such that expensive runtime recompilations with LLVM can be avoided altogether. Instead, the ELF file becomes effectively a "template", meaning, it is compiled only once (!) and the Cilium daemon will then rewrite relevant configuration data from the ELF's .data or .rodata sections directly instead of recompiling the program. The updated ELF is then loaded into the kernel and atomically replaces the existing program in the networking datapath. More info in [0]. Based upon recent fix in LLVM, commit c0db6b6bd444 ("[BPF] Don't fail for static variables"). [0] LPC 2018, BPF track, "ELF relocation for static data in BPF", http://vger.kernel.org/lpc-bpf2018.html#session-3 Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
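A hypothetical BPF C program showing the programming style this enables; only static globals are accepted at this point, and the variable names, section placement comments and SEC() section are illustrative:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"

    static const __u32 limits[2] = { 1500, 9000 };   /* placed in .rodata */
    static __u32 mode = 1;                           /* placed in .data   */
    static __u32 seen_big;                           /* placed in .bss    */

    SEC("classifier")
    int count_big_packets(struct __sk_buff *skb)
    {
        if (skb->len > limits[mode & 1])
            seen_big++;          /* direct access, no explicit map lookup */
        return 0;
    }

    char _license[] SEC("license") = "GPL";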
2019-04-09  bpf, libbpf: refactor relocation handling  (Joe Stringer)
Adjust the code for relocations slightly with no functional changes, so that upcoming patches that will introduce support for relocations into the .data, .rodata and .bss sections can be added independent of these changes. Signed-off-by: Joe Stringer <joe@wand.net.nz> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: sync {btf, bpf}.h uapi header from tools infrastructure  (Daniel Borkmann)
Pull in latest changes from both headers, so we can make use of them in libbpf. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: allow for key-less BTF in array map  (Daniel Borkmann)
Given we'll be reusing BPF array maps for global data/bss/rodata sections, we need a way to associate the BTF DataSec type as the map's value type. In the usual cases we have the ugly BPF_ANNOTATE_KV_PAIR() macro hack, e.g. via 38d5d3b3d5db ("bpf: Introduce BPF_ANNOTATE_KV_PAIR"), to get the initial map-to-type association going. While more use cases for it are discouraged, this also won't work for global data since the use of an array map is a BPF loader detail and therefore unknown at compilation time. For array maps with just a single entry we make an exception in terms of BTF, in that the key type is declared optional if the value type is of DataSec type. LLVM is guaranteed to emit the latter, and it also aligns with how we regard global data maps as just a plain buffer area, reusing existing map facilities to allow things like introspection with existing tools. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: kernel side support for BTF Var and DataSec  (Daniel Borkmann)
This work adds kernel-side verification, logging and seq_show dumping of the BTF Var and DataSec kinds which are emitted with the latest LLVM. The following constraints apply: BTF Var must have: - Its kind_flag is 0 - Its vlen is 0 - Must point to a valid type - Type must not resolve to a forward type - Size of underlying type must be > 0 - Must have a valid name - Can only be a source type, not sink or intermediate one - Name may include dots (e.g. in case of static variables inside functions) - Cannot be a member of a struct/union - Linkage so far can either only be static or global/allocated BTF DataSec must have: - Its kind_flag is 0 - Its vlen cannot be 0 - Its size cannot be 0 - Must have a valid name - Can only be a source type, not sink or intermediate one - Name may include dots (e.g. to represent .bss, .data, .rodata etc) - Cannot be a member of a struct/union - Inner btf_var_secinfo array with {type,offset,size} triple must be sorted by offset in ascending order - Type must always point to BTF Var - BTF resolved size of Var must be <= size provided by triple - DataSec size must be >= sum of triple sizes (thus holes are allowed) btf_var_resolve(), btf_ptr_resolve() and btf_modifier_resolve() are on a high level quite similar but each comes with slight, subtle differences. They could potentially be refactored a bit in the future, which hasn't been done here to ease review. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: add specification for BTF Var and DataSec kinds  (Daniel Borkmann)
This adds the BTF specification and UAPI bits for supporting the BTF Var and DataSec kinds. This follows LLVM upstream commit ac4082b77e07 ("[BPF] Add BTF Var and DataSec Support"), which has been merged recently. Var describes a global variable, and DataSec describes ELF sections, e.g. the data/bss/rodata sections, that hold one or multiple global variables. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
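For reference, the new kinds are described by the following UAPI structures in include/uapi/linux/btf.h (field comments paraphrased):

    /* BTF_KIND_VAR is followed by a single struct btf_var */
    struct btf_var {
        __u32 linkage;          /* BTF_VAR_STATIC or BTF_VAR_GLOBAL_ALLOCATED */
    };

    /* BTF_KIND_DATASEC is followed by vlen of these, one per variable */
    struct btf_var_secinfo {
        __u32 type;             /* the variable's BTF_KIND_VAR type */
        __u32 offset;           /* offset of the variable within the section */
        __u32 size;             /* size of the variable in bytes */
    };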
2019-04-09  bpf: allow . char as part of the object name  (Daniel Borkmann)
Trivial addition to allow '.' in addition to '_' as a "special" character in the object name. This is used to allow for substrings in map names from the loader side such as ".bss", ".data", ".rodata", but could also be useful for other purposes. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: add syscall side map freeze support  (Daniel Borkmann)
This patch adds a new BPF_MAP_FREEZE command which allows "freezing" the map globally as read-only / immutable from the syscall side. Map permission handling has been refactored into map_get_sys_perms() and drops FMODE_CAN_WRITE in case of a locked map. The main use case is to allow for setting up .rodata sections from the BPF ELF which are loaded into the kernel, meaning the BPF loader first allocates the map, sets up the map value by copying the .rodata section into it, and once complete, calls BPF_MAP_FREEZE on the map fd to prevent further modifications. Right now BPF_MAP_FREEZE only takes the map fd as an argument while the remaining bpf_attr members are required to be zero. I didn't add write-only locking here as a counterpart since I don't have a concrete use-case for it on my side, and I think it probably makes more sense to wait until there actually is one. In that case bpf_attr can be extended as usual with a flag field and/or others, where flag 0 means that we lock the map read-only; hence this doesn't prevent adding further extensions to BPF_MAP_FREEZE upon need. A map creation flag like BPF_F_WRONCE was not considered for a couple of reasons: i) in case of a generic implementation, a map can consist of more than just one element, thus there could be multiple map updates needed to set the map into a state where it can then be made immutable; ii) WRONCE indicates exactly one write before it is then set immutable. A generic implementation would set a bit atomically on map update entry (if unset), indicating that every subsequent update from then onwards would need to bail out there. However, map updates can fail, so upon failure that flag would need to be unset again and the update attempt would need to be repeated for it to be eventually made immutable. While this can be made race-free, this approach feels less clean and, in combination with reason i), it's not generic enough. A dedicated BPF_MAP_FREEZE command directly sets the flag, and upon successful return the caller has the guarantee that the map is immutable from the syscall side for any future syscall invocations that would alter the map state, which is also more intuitive from an API point of view. A command name such as BPF_MAP_LOCK has been avoided as it's too close to BPF map spin locks (which already have the BPF_F_LOCK flag). BPF_MAP_FREEZE is so far only enabled for privileged users. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
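From user space the command is a plain bpf(2) call; a minimal sketch (the wrapper name is illustrative):

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static int map_freeze_fd(int map_fd)
    {
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));   /* all other members must be zero */
        attr.map_fd = map_fd;

        return syscall(__NR_bpf, BPF_MAP_FREEZE, &attr, sizeof(attr));
    }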
2019-04-09  bpf: add program side {rd, wr}only support for maps  (Daniel Borkmann)
This work adds two new map creation flags BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG in order to allow for read-only or write-only BPF maps from the BPF program side. Today we have BPF_F_RDONLY and BPF_F_WRONLY, but these only apply to the system call side, meaning the BPF program has full read/write access to the map as usual while bpf(2) calls with the map fd can either only read or only write into the map depending on the flags. BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG allow for the exact opposite, such that the verifier is going to reject program loads if a write into a read-only map or a read from a write-only map is detected. For the read-only map case, helpers that would alter the map state, such as map deletion, update, etc., are also forbidden for programs. As opposed to the two BPF_F_RDONLY / BPF_F_WRONLY flags, BPF_F_RDONLY_PROG as well as BPF_F_WRONLY_PROG really do correspond to the map lifetime. We've enabled this generic map extension for various non-special maps holding normal user data: array, hash, lru, lpm, local storage, queue and stack. Further generic map types could follow in the future depending on use-case. The main use case here is to forbid writes into .rodata map values from the verifier side. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
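For illustration, creating a single-entry array that the program may only read (while the syscall side can still seed it and later freeze it); this sketch uses libbpf's bpf_create_map_xattr() as it existed at the time, with an illustrative map name and sizes:

    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    static int create_prog_rdonly_map(void)
    {
        struct bpf_create_map_attr attr = {
            .name        = "demo.rodata",        /* '.' allowed per the patch above */
            .map_type    = BPF_MAP_TYPE_ARRAY,
            .map_flags   = BPF_F_RDONLY_PROG,    /* program side read-only */
            .key_size    = sizeof(__u32),
            .value_size  = 64,
            .max_entries = 1,
        };

        return bpf_create_map_xattr(&attr);
    }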
2019-04-09  bpf: do not retain flags that are not tied to map lifetime  (Daniel Borkmann)
Both BPF_F_WRONLY / BPF_F_RDONLY flags are tied to the map file descriptor, but not to the map object itself! Meaning, at map creation time BPF_F_RDONLY can be set to make the map read-only from the syscall side, but this holds only for the returned fd; any other fd, retrieved either via the bpf file system or via the map id for the very same underlying map object, can have read-write access instead. Given that, keeping the two flags around in the map_flags attribute and exposing them to user space upon map dump is misleading and may lead to false conclusions. Since these two flags are not tied to the map object, let's also not store them as a map property. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-09  bpf: implement lookup-free direct value access for maps  (Daniel Borkmann)
This generic extension to BPF maps allows for directly loading an address residing inside a BPF map value as a single BPF ldimm64 instruction! The idea is similar to what BPF_PSEUDO_MAP_FD does today, which is a special src_reg flag for the ldimm64 instruction indicating that the first part of the double insn's imm field holds a file descriptor, which the verifier then replaces with the full 64-bit address of the map spread across both imm parts. For the newly added BPF_PSEUDO_MAP_VALUE src_reg flag, the idea is the following: the first part of the double insn's imm field is again a file descriptor corresponding to the map, and the second part of the imm field is an offset into the value. The verifier will then replace both imm parts with an address that points into the BPF map value at the given value offset for maps that support this operation. Currently supported is the array map with a single entry. It is possible to support more than just a single map element by reusing both 16-bit off fields of the insns as a map index, so a full array map lookup could be expressed that way. It hasn't been implemented here due to the lack of a concrete use case, but could easily be done in the future in a compatible way, since both off fields right now have to be 0 and would correctly denote a map index of 0. BPF_PSEUDO_MAP_VALUE is a distinct flag because otherwise, with BPF_PSEUDO_MAP_FD, we could not distinguish at offset 0 between a load of the map pointer and a load of the map's value at offset 0, and changing BPF_PSEUDO_MAP_FD's encoding to be off by one to distinguish between a regular map pointer and a map value pointer would add unnecessary complexity and raise the barrier for debuggability, thus being less suitable. Using the second part of the imm field as an offset into the value does /not/ come with limitations since the maximum possible value size is within the u32 range anyway. This optimization allows for efficiently retrieving an address to a map value memory area without having to issue a helper call, which needs to prepare registers according to the calling convention, etc., without needing the extra NULL test, and without having to add the offset to the value base pointer in an additional instruction. The verifier then treats the destination register as PTR_TO_MAP_VALUE with a constant reg->off from the user-passed offset in the second imm field, and guarantees that this is within the bounds of the map value. Any subsequent operations are normally treated as typical map value handling without anything extra needed from the verification side. The two map operations for direct value access have been added to the array map for now. In the future other types could be supported as well, depending on the use case. The main use case for this commit is to allow for BPF loader support for global variables that reside in .data/.rodata/.bss sections, such that we can directly load their address with minimal additional infrastructure required. Loader support has been added in subsequent commits for the libbpf library. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
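A sketch of the resulting instruction encoding from the loader's point of view (the helper name is illustrative):

    #include <linux/bpf.h>

    static void emit_ld_map_value(struct bpf_insn *insn, __u8 dst_reg,
                                  int map_fd, __s32 value_off)
    {
        /* first half: src_reg marks the pseudo load, imm carries the map fd */
        insn[0] = (struct bpf_insn) {
            .code    = BPF_LD | BPF_DW | BPF_IMM,
            .dst_reg = dst_reg,
            .src_reg = BPF_PSEUDO_MAP_VALUE,
            .imm     = map_fd,
        };
        /* second half: imm carries the offset into the value; both off stay 0 */
        insn[1] = (struct bpf_insn) { .imm = value_off };
    }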
2019-04-09  net: phy: remove unnecessary callback settings in C45 drivers  (Heiner Kallweit)
genphy_c45_aneg_done() is used by phylib as a fallback for C45 PHYs if the aneg_done callback isn't defined, so we don't have to set it explicitly. The same goes for genphy_c45_pma_read_abilities(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-09  Merge branch 'netdevsim-small-spring-cleanup'  (David S. Miller)
Jiri Pirko says: ==================== netdevsim: small spring cleanup Nothing serious, just cosmetics. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-09  netdevsim: assume CONFIG_NET_DEVLINK is always enabled  (Jiri Pirko)
Since commit f6b19b354d50 ("net: devlink: select NET_DEVLINK from drivers") adds an implicit select of NET_DEVLINK for netdevsim, the code does not have to deal with the case where CONFIG_NET_DEVLINK is not enabled. So remove the ifdef case. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-09  netdevsim: let net core to free netdevsim netdev  (Jiri Pirko)
No need to free it ourselves; just set the "needs_free_netdev" flag and leave the work to the net core. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
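The pattern is simply (sketch; the setup function name is illustrative):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    static void nsim_setup_sketch(struct net_device *dev)
    {
        ether_setup(dev);
        dev->needs_free_netdev = true;  /* net core calls free_netdev() for us */
    }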
2019-04-09  netdevsim: remove nsim_dellink() implementation  (Jiri Pirko)
Remove nsim_dellink() implementation. The rtnetlink code sets the dellink op to unregister_netdevice_queue(), so this is not needed. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
2019-04-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (Linus Torvalds)
Pull networking fixes from David Miller: 1) Off-by-one and bounds checking fixes in NFC, from Dan Carpenter. 2) There have been many weird regressions in r8169 since we turned ASPM support on; some are still not understood nor completely resolved. Let's turn this back off for now. From Heiner Kallweit. 3) Signedness fixes for ethtool speed value handling, from Michael Zhivich. 4) Handle timestamps properly in macb driver, from Paul Thomas. 5) Two erspan fixes; it's the usual "skb->data potentially reallocated and we're holding a stale protocol header pointer". From Lorenzo Bianconi. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: bnxt_en: Reset device on RX buffer errors. bnxt_en: Improve RX consumer index validity check. net: macb driver, check for SKBTX_HW_TSTAMP qlogic: qlcnic: fix use of SPEED_UNKNOWN ethtool constant broadcom: tg3: fix use of SPEED_UNKNOWN ethtool constant ethtool: avoid signed-unsigned comparison in ethtool_validate_speed() net: ip6_gre: fix possible use-after-free in ip6erspan_rcv net: ip_gre: fix possible use-after-free in erspan_rcv r8169: disable ASPM again MAINTAINERS: ieee802154: update documentation file pattern net: vrf: Fix ping failed when vrf mtu is set to 0 selftests: add a tc matchall test case nfc: nci: Potential off by one in ->pipes[] array NFC: nci: Add some bounds checking in nci_hci_cmd_received()
2019-04-08  Merge branch 'fixes-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security  (Linus Torvalds)
Pull TPM fixes from James Morris: "From Jarkko: These are critical fixes for v5.1. It also contains a couple of new selftests for v5.1 features (partial reads in /dev/tpm0)" * 'fixes-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: selftests/tpm2: Open tpm dev in unbuffered mode selftests/tpm2: Extend tests to cover partial reads KEYS: trusted: fix -Wvarags warning tpm: Fix the type of the return value in calc_tpm2_event_size() KEYS: trusted: allow trusted.ko to initialize w/o a TPM tpm: fix an invalid condition in tpm_common_poll tpm: turn on TPM on suspend for TPM 1.x
2019-04-08  Merge tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa  (Linus Torvalds)
Pull xtensa fixes from Max Filippov: - fix syscall number passed to trace_sys_exit - fix syscall number initialization in start_thread - fix level interpretation in the return_address - fix format string warning in init_pmd * tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa: xtensa: fix format string warning in init_pmd xtensa: fix return_address xtensa: fix initialization of pt_regs::syscall in start_thread xtensa: use actual syscall number in do_syscall_trace_leave
2019-04-08  Merge branch 'bnxt_en-fixes'  (David S. Miller)
Michael Chan says: ==================== bnxt_en: 2 bug fixes. The first patch prevents a possible driver crash if we get a bad RX index from the hardware. The second patch resets the device when the hardware reports a buffer error, to recover from the error. Please queue these for -stable also. Thanks. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  bnxt_en: Reset device on RX buffer errors.  (Michael Chan)
If the RX completion indicates RX buffer errors, the RX ring will be disabled by firmware and no packets will be received on that ring from that point on. Recover by resetting the device. Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  bnxt_en: Improve RX consumer index validity check.  (Michael Chan)
There is logic to check that the RX/TPA consumer index is the expected index, to work around a hardware problem. However, the potentially bad consumer index is first used to index into an array to reference an entry. This can potentially crash if the bad consumer index is beyond the legal range. Improve the logic to use the consumer index for dereferencing only after the validity check, and log an error message. Fixes: fa7e28127a5a ("bnxt_en: Add workaround to detect bad opaque in rx completion (part 2)") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  net: macb driver, check for SKBTX_HW_TSTAMP  (Paul Thomas)
Make sure SKBTX_HW_TSTAMP (i.e. SOF_TIMESTAMPING_TX_HARDWARE) has been enabled for this skb. With this change, normal sockets that aren't expecting a timestamp will not wake up on select, while timestamping still works when a user does request SOF_TIMESTAMPING_TX_HARDWARE. Signed-off-by: Paul Thomas <pthomas8589@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
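The check itself is a one-liner on the TX path (sketch; the wrapper name is illustrative):

    #include <linux/skbuff.h>

    static bool macb_wants_hw_tstamp(const struct sk_buff *skb)
    {
        /* only arm hardware timestamping when the socket asked for it */
        return skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP;
    }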
2019-04-08  ethtool: thunder_bgx: use ethtool.h constants for speed and duplex  (Michael Zhivich)
Use constants provided by ethtool.h for speed and duplex values instead of raw integer constants to increase code readability. thunder_bgx already uses SPEED_UNKNOWN and DUPLEX_UNKNOWN constants, also provided by ethtool.h. Signed-off-by: Michael Zhivich <mzhivich@akamai.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  net: phy: fix setting autoneg_complete in genphy_update_link  (Heiner Kallweit)
The original patch didn't set phydev->autoneg_complete in one exit path. Fix this. Fixes: 4950c2ba49cc ("net: phy: fix autoneg mismatch case in genphy_read_status") Reported-by: Simon Horman <horms@verge.net.au> Tested-by: Simon Horman <horms@verge.net.au> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  Merge branch 'ethtool-fix-use-of-SPEED_UNKNOWN-constant'  (David S. Miller)
Michael Zhivich says: ==================== ethtool: fix use of SPEED_UNKNOWN constant This patch series addresses 2 related issues: 1. ethtool_validate_speed() triggers a "signed-unsigned comparison" warning due to the type difference between the SPEED_UNKNOWN constant (int) and the argument to ethtool_validate_speed() (__u32). 2. some drivers use u16 storage for the SPEED_UNKNOWN constant, resulting in value truncation and thus failure to test against SPEED_UNKNOWN correctly. This revised series addresses several feedback comments: - split up the patch into a series - do not unnecessarily change drivers that use "int" storage for speed values ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  qlogic: qlcnic: fix use of SPEED_UNKNOWN ethtool constant  (Michael Zhivich)
The qlcnic driver uses u16 to store the SPEED_UNKNOWN ethtool constant, which is defined as -1, resulting in value truncation and thus incorrect test results against SPEED_UNKNOWN. For example, the following test will print "False": u16 speed = SPEED_UNKNOWN; if (speed == SPEED_UNKNOWN) printf("True"); else printf("False"); Change the storage of speed to u32 to avoid this issue. Signed-off-by: Michael Zhivich <mzhivich@akamai.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  broadcom: tg3: fix use of SPEED_UNKNOWN ethtool constant  (Michael Zhivich)
The tg3 driver uses u16 to store the SPEED_UNKNOWN ethtool constant, which is defined as -1, resulting in value truncation and thus incorrect test results against SPEED_UNKNOWN. For example, the following test will print "False": u16 speed = SPEED_UNKNOWN; if (speed == SPEED_UNKNOWN) printf("True"); else printf("False"); Change the storage of speed to u32 to avoid this issue. Signed-off-by: Michael Zhivich <mzhivich@akamai.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-08  ethtool: avoid signed-unsigned comparison in ethtool_validate_speed()  (Michael Zhivich)
When building C++ userspace code that includes ethtool.h with "-Werror -Wall", g++ complains about a signed-unsigned comparison in ethtool_validate_speed() due to the definition of SPEED_UNKNOWN as -1. Explicitly cast SPEED_UNKNOWN to __u32 to match the type of the ethtool_validate_speed() argument. Signed-off-by: Michael Zhivich <mzhivich@akamai.com> Signed-off-by: David S. Miller <davem@davemloft.net>
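The resulting helper shape in uapi ethtool.h, as described above (shown for reference, minus surrounding context):

    static inline int ethtool_validate_speed(__u32 speed)
    {
        return speed <= INT_MAX || speed == (__u32)SPEED_UNKNOWN;
    }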