Comparing changes
base repository: libbpf/libbpf
base: master
head repository: libbpf/libbpf
compare: libbpf-1.3.4
- 15 commits
- 4 files changed
- 6 contributors
Commits on Jul 10, 2024
- libbpf: fix BPF skeleton forward/backward compat handling (a71b784)
BPF skeleton was designed from day one to be extensible. Generated BPF skeleton code specifies the actual sizes of map/prog/variable skeletons for that reason, and libbpf is supposed to work correctly with newer/older versions. Unfortunately, it was missed that we implicitly embed hard-coded, most up-to-date (according to the version of the libbpf.h header used to compile the BPF skeleton header) sizes of those structs, which can differ from the actual sizes at runtime when libbpf is used as a shared library. We have a few places where we just index the array of maps/progs/vars, which implicitly uses these potentially invalid struct sizes. This patch aims to fix this problem going forward. Once this lands, we'll backport these changes in the GitHub repo to create patched releases for older libbpfs.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Fixes: 430025e5dca5 ("libbpf: Add subskeleton scaffolding")
Fixes: 08ac454e258e ("libbpf: Auto-attach struct_ops BPF maps in BPF skeleton")
Co-developed-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240708204540.4188946-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
(cherry picked from commit bf7ddbe)
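To illustrate, a minimal sketch (not the verbatim patch) of the size-aware indexing the fix requires: struct bpf_object_skeleton records map_skel_sz, the sizeof(struct bpf_map_skeleton) that the generated skeleton was compiled against, so libbpf must stride by that recorded size rather than by its own compile-time sizeof():

    /* Sketch, assuming libbpf's public skeleton structs; s is a
     * struct bpf_object_skeleton * filled in by generated code. */
    for (int i = 0; i < s->map_cnt; i++) {
        /* &s->maps[i] would stride by the runtime library's
         * sizeof(struct bpf_map_skeleton), which may not match the
         * size recorded when the skeleton header was generated. */
        struct bpf_map_skeleton *map_skel =
            (void *)s->maps + (size_t)i * s->map_skel_sz;

        *map_skel->map = bpf_object__find_map_by_name(*s->obj, map_skel->name);
    }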
- libbpf: bump version to v1.3.1 (34bbba3)
Bump patch version to prepare for v1.3.1.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Commits on Jul 11, 2024
- libbpf: detect broken PID filtering logic for multi-uprobe (bcc49c4)
Libbpf automatically (and transparently to the user) detects multi-uprobe support in the kernel and, if supported, uses multi-uprobes to improve USDT attachment speed. USDTs can be attached system-wide or to a specific process by PID. In the latter case, we rely on correct kernel logic of not triggering USDT for unrelated processes. As such, on older kernels that do support multi-uprobes but still have broken PID filtering logic, we need to fall back to singular uprobes.

Unfortunately, whether the user is using PID filtering or not is known only at attachment time, which happens after the relevant BPF programs were loaded into the kernel. Also unfortunately, we need to make the call whether to use multi-uprobes or singular uprobes for SEC("usdt") programs during BPF object load time, at which point we have no information about possible PID filtering.

The distinction between singular and multi-uprobes is small, but important for the kernel. Multi-uprobes get the BPF_TRACE_UPROBE_MULTI attach type, and the kernel internally substitutes different implementations of some BPF helpers (e.g., bpf_get_attach_cookie()) depending on whether the uprobe is multi or singular. So multi-uprobes and singular uprobes cannot be intermixed.

All the above implies that we have to make an early and conservative call about the use of multi-uprobes. And so this patch modifies libbpf's existing feature detector for multi-uprobe support to also check correct PID filtering. If PID filtering is not yet fixed, we fall back to singular uprobes for USDTs. This extension to feature detection is simple thanks to the kernel's -EINVAL addition for pid < 0.

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240521163401.3005045-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
(cherry picked from commit d9f9fd5)
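For context, this is the kind of PID-filtered USDT attachment that relies on the fallback; the binary path, provider, and probe names below are hypothetical, and prog/target_pid are assumed to come from a loaded BPF object:

    #include <bpf/libbpf.h>

    /* Attach a loaded SEC("usdt") program to a single process. On
     * kernels with broken multi-uprobe PID filtering, libbpf now falls
     * back to singular uprobes so the filter is honored. */
    LIBBPF_OPTS(bpf_usdt_opts, opts);

    struct bpf_link *link = bpf_program__attach_usdt(
            prog,                      /* SEC("usdt") program */
            target_pid,                /* only this PID should trigger */
            "/usr/bin/myapp",          /* hypothetical traced binary */
            "myapp", "request_start",  /* hypothetical provider/name */
            &opts);
    if (!link)
        /* ... handle attach error ... */;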
- libbpf: bump version to v1.3.2 (821aa78)
Bump patch version to prepare for v1.3.2.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Commits on Sep 3, 2024
- libbpf: Fix bpf_object__open_skeleton()'s mishandling of options (00db402)
We do an ugly copying of options in bpf_object__open_skeleton() just to be able to set the object name from the skeleton's recorded name (while still allowing the user to override it through opts->object_name). This is not just ugly; it is also broken, because the memcpy() doesn't take into account potential differences between skel_opts' and user-provided opts' sizes due to backward and forward compatibility. This leads to copying over extra bytes and then failing to validate options properly. It could, technically, also lead to SIGSEGV, if we are unlucky.

So just get rid of that memory copy completely and instead pass the default object name into bpf_object_open() directly, simplifying all this significantly. The rule now is that obj_name should be non-NULL for bpf_object_open() when called with an in-memory buffer, so validate that explicitly as well. We adapt bpf_object__open_mem() to this as well and generate the default name (based on buffer memory address and size) outside of bpf_object_open().

Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Reported-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Daniel Müller <deso@posteo.net>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20240827203721.1145494-1-andrii@kernel.org
(cherry picked from commit f6f2402)
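The user-facing pattern being fixed, sketched with a hypothetical bpftool-generated skeleton named myprog:

    #include <bpf/libbpf.h>
    #include "myprog.skel.h"  /* hypothetical generated skeleton */

    /* A user-provided name still overrides the skeleton's recorded
     * default; with the fix, no size-unsafe memcpy() of the opts
     * struct happens underneath this call. */
    LIBBPF_OPTS(bpf_object_open_opts, opts,
        .object_name = "my_custom_name",
    );

    struct myprog_bpf *skel = myprog_bpf__open_opts(&opts);
    if (!skel)
        /* ... handle open error ... */;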
- libbpf: Ensure new BTF objects inherit input endianness (169057f)
New split BTF needs to preserve the base's endianness. Similarly, when creating distilled BTF, we need to preserve the original endianness. Fix this by updating libbpf's btf__distill_base() and btf_new_empty() to retain the byte order of any source BTF objects when creating new ones.

Fixes: ba451366bf44 ("libbpf: Implement basic split BTF support")
Fixes: 58e185a0dc35 ("libbpf: Add btf__distill_base() creating split BTF with distilled base BTF")
Reported-by: Song Liu <song@kernel.org>
Reported-by: Eduard Zingerman <eddyz87@gmail.com>
Suggested-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Tony Ambardar <tony.ambardar@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/6358db36c5f68b07873a0a5be2d062b1af5ea5f8.camel@gmail.com/
Link: https://lore.kernel.org/bpf/20240830095150.278881-1-tony.ambardar@gmail.com
(cherry picked from commit fe28fae)
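A quick sketch of the property the fix guarantees, assuming src_btf is a (possibly non-native-endian) BTF object loaded elsewhere:

    #include <assert.h>
    #include <bpf/btf.h>

    /* After the fix, BTF objects derived from src_btf inherit its byte
     * order instead of silently defaulting to host endianness. */
    struct btf *new_base = NULL, *new_split = NULL;

    if (btf__distill_base(src_btf, &new_base, &new_split) == 0) {
        enum btf_endianness order = btf__endianness(src_btf);

        assert(btf__endianness(new_base) == order);
        assert(btf__endianness(new_split) == order);
    }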
- libbpf: bump version to v1.3.3 (c394219)
Bump patch version to prepare for v1.3.3.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Commits on Oct 29, 2024
- libbpf: Skip base btf sanity checks (403f390)
When upgrading to libbpf 1.3 we noticed a big performance hit while loading programs using CO-RE on non-base-BTF symbols. This was tracked down to the new BTF sanity check logic: the base BTF definitions were checked first for the base BTF and then again for every module BTF.

Loading 5 dummy programs (using libbpf-rs) that use CO-RE on a non-base-BTF symbol on my system:
- Before this fix: 3s
- With this fix: 0.1s

Fix this by only checking the types starting at the BTF start id. This ensures the base BTF is still checked as expected, but only once (btf->start_id == 1 when creating the base BTF), and then only the additional types are checked for each module BTF.

Fixes: 3903802bb99a ("libbpf: Add basic BTF sanity validation")
Signed-off-by: Antoine Tenart <atenart@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20240624090908.171231-1-atenart@kernel.org
(cherry picked from commit 95c63a0)
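The shape of the fix, sketched against libbpf's internals (btf_validate_type here stands for libbpf's internal per-type validator; this is a sketch, not the verbatim patch):

    /* Validate only the types this BTF object owns. Base BTF has
     * start_id == 1, so it is still fully checked, exactly once;
     * split/module BTF skips the shared base types. */
    static int btf_sanity_check(const struct btf *btf)
    {
        __u32 id, nr = btf__type_cnt(btf);

        for (id = btf->start_id; id < nr; id++) {  /* was: id = 1 */
            if (btf_validate_type(btf, btf__type_by_id(btf, id), id))
                return -EINVAL;
        }
        return 0;
    }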
- libbpf: move global data mmap()'ing into bpf_object__load() (15fd61f)
Since BPF skeleton's inception, libbpf has been mmap()'ing global data ARRAY maps in the bpf_object__load_skeleton() API, which is used by code-generated .skel.h files (i.e., by BPF skeletons only). This is wrong because if a BPF object is loaded through the generic bpf_object__load() API, global data maps won't be re-mmap()'ed after the load step, and memory pointers returned from bpf_map__initial_value() would be wrong and wouldn't reflect the actual memory shared between the BPF program and user space.

bpf_map__initial_value()'s return result is rarely used after load, so this went unnoticed for a really long time, until the bpftrace project attempted to load a BPF object through the generic bpf_object__load() API and then used a BPF subskeleton instantiated from such a bpf_object. It turned out that .data/.rodata/.bss data updates through such a subskeleton were "blackholed", all because libbpf wouldn't re-mmap() those maps during the bpf_object__load() phase.

Long story short, this step should be done by libbpf regardless of BPF skeleton usage, right after the BPF map is created in the kernel. This patch moves this functionality into bpf_object__populate_internal_map() to achieve that. And bpf_object__load_skeleton() is now simple and almost trivial, only propagating these mmap()'ed pointers into user-supplied skeleton structs. We also make trivial adjustments to error reporting inside bpf_object__populate_internal_map() for consistency with the rest of libbpf's map-handling code.

Reported-by: Alastair Robertson <ajor@meta.com>
Reported-by: Jonathan Wiepert <jwiepert@meta.com>
Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241023043908.3834423-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
(cherry picked from commit 2dea4b8)
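With the fix, the generic-API path behaves like the skeleton path; a sketch (the object file name is hypothetical):

    #include <bpf/libbpf.h>

    struct bpf_object *obj = bpf_object__open_file("myprog.bpf.o", NULL);
    size_t sz;

    if (obj && bpf_object__load(obj) == 0) {
        /* lookups starting with '.' match internal map names such as
         * "myprog.bss" */
        struct bpf_map *bss = bpf_object__find_map_by_name(obj, ".bss");
        void *data = bpf_map__initial_value(bss, &sz);

        /* data now points at live, mmap()'ed memory shared with the
         * BPF program, even without going through a skeleton */
    }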
- libbpf: never interpret subprogs in .text as entry programs (948ca44)
Libbpf pre-1.0 had legacy logic allowing a singular non-annotated (i.e., not having an explicit SEC() annotation) function to be treated as the sole entry BPF program (unless there were other explicit entry programs). This behavior was dropped during the libbpf 1.0 transition period (unless the LIBBPF_STRICT_SEC_NAME flag was unset in libbpf_mode). When 1.0 was released and all the legacy behavior was removed, this bug slipped through, leaving the legacy behavior around. Fix this for good, as it actually causes very confusing behavior if a BPF object file has only subprograms but no entry programs.

Fixes: bd054102a8c7 ("libbpf: enforce strict libbpf 1.0 behaviors")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241010211731.4121837-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
(cherry picked from commit ac9ced9)
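To make the distinction concrete, a small illustrative BPF object: bar() has no SEC() annotation, so it is a .text subprog and must never be picked as an entry program, even if it were the only function in the file:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* no SEC() annotation: lands in .text as a subprog */
    __noinline int bar(int x)
    {
        return x * 2;
    }

    SEC("xdp")
    int handle_xdp(struct xdp_md *ctx)  /* explicit entry program */
    {
        return bar(1) ? XDP_PASS : XDP_DROP;
    }

    char LICENSE[] SEC("license") = "GPL";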
- libbpf: fix sym_is_subprog() logic for weak global subprogs (5623de3)
sym_is_subprog() is incorrectly rejecting relocations against *weak* global subprogs. Fix that by recognizing that STB_WEAK is also a global function. While it seems like the verifier doesn't support taking an address of a non-static subprog right now, it's still best to fix support for it on the libbpf side; otherwise users will get a very confusing error during BPF skeleton generation or static linking due to a misinterpreted relocation:

    libbpf: prog 'handle_tp': bad map relo against 'foo' in section '.text'
    Error: failed to open BPF object file: Relocation failed

It's clearly not a map relocation, but it is treated and reported as such without this fix.

Fixes: 53eddb5e04ac ("libbpf: Support subprog address relocation")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241009011554.880168-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
(cherry picked from commit 0e39713)
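An illustrative snippet of the construct that used to trigger the bogus error (as the commit notes, kernels may not load it yet, but skeleton generation and static linking should not misreport the relocation):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* weak (STB_WEAK) global subprog in .text */
    __weak __noinline int foo(void)
    {
        return 42;
    }

    SEC("tp/syscalls/sys_enter_getpid")
    int handle_tp(void *ctx)
    {
        /* taking foo's address emits a relocation against a weak
         * global function symbol, which sym_is_subprog() rejected */
        int (*fn)(void) = &foo;

        return fn ? foo() : 0;
    }

    char LICENSE[] SEC("license") = "GPL";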
- libbpf: Do not resolve size on duplicate FUNCs (b646f0b)
FUNCs do not have sizes, so currently btf__resolve_size() will fail with -EINVAL. Add conditions so that we only update the size when the BTF object is not a function or function prototype.

Signed-off-by: Eric Long <i@hack3r.moe>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241002-libbpf-dup-extern-funcs-v4-1-560eb460ff90@hack3r.moe
(cherry picked from commit ecf998e)
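For reference, the API semantics involved (btf and type_id are assumed to come from elsewhere):

    #include <bpf/btf.h>

    /* Size resolution is only meaningful for data types; FUNC and
     * FUNC_PROTO carry no size, so resolving them yields an error. */
    __s64 sz = btf__resolve_size(btf, type_id);
    if (sz < 0)
        /* e.g., -EINVAL for FUNC/FUNC_PROTO type IDs */;
    else
        /* sz is the byte size of the resolved type */;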
- libbpf: Fix uretprobe.multi.s programs auto attachment (72114b9)
As reported by Andrii, we don't currently recognize uretprobe.multi.s programs as return probes, due to using the (wrong) strcmp function. Use str_has_pfx() instead to match the uretprobe.multi prefix. Tests were passing because the return program was executed as an entry program and all counts were incremented properly.

Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240910125336.3056271-1-jolsa@kernel.org
(cherry picked from commit 6967130)
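The section names in question (the binary path and function pattern below are hypothetical): the sleepable variant appends ".s", so an exact strcmp() against "uretprobe.multi" misses it, while a str_has_pfx() prefix match covers both spellings:

    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>

    SEC("uretprobe.multi//usr/bin/myapp:handle_*")
    int ret_probe(struct pt_regs *ctx) { return 0; }

    /* sleepable variant: only a prefix match recognizes this as a
     * return probe */
    SEC("uretprobe.multi.s//usr/bin/myapp:handle_*")
    int ret_probe_sleepable(struct pt_regs *ctx) { return 0; }

    char LICENSE[] SEC("license") = "GPL";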
- libbpf: Fix expected_attach_type set handling in program load callback (77a47ca)
The referenced commit broke the logic of resetting expected_attach_type to zero for allowed program types if the kernel doesn't yet support such a field. We do need to overwrite and preserve expected_attach_type for multi-uprobes, though, but that can be done explicitly in libbpf_prepare_prog_load().

Fixes: 5902da6d8a52 ("libbpf: Add uprobe multi link support to bpf_program__attach_usdt")
Suggested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Tao Chen <chen.dylane@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240925153012.212866-1-chen.dylane@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
(cherry picked from commit ad633fb)
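A rough sketch of the intended behavior inside libbpf's load-preparation callback, using libbpf's internal feature-probe naming (an assumption-laden sketch, not the verbatim patch):

    /* On kernels without expected_attach_type support for this program
     * type, clear the field so loading succeeds; uprobe-multi programs
     * are the exception and must keep BPF_TRACE_UPROBE_MULTI. */
    if (!kernel_supports(obj, FEAT_EXP_ATTACH_TYPE) &&
        opts->expected_attach_type != BPF_TRACE_UPROBE_MULTI)
        opts->expected_attach_type = 0;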
- libbpf: bump version to v1.3.4 (bf3ffb6)
New patch version bump for v1.3.4.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
GitHub could not render this comparison (it may be too big). You can run this command locally to see the comparison on your machine:

    git diff master...libbpf-1.3.4