Conversation

@ZzzMao ZzzMao commented Nov 8, 2023

Set pfe_per_function default to false.

Suraj Jitindar Singh and others added 19 commits July 28, 2023 16:32
The "has_function_profiling" support field in the symbol struct is used to show
that a function symbol is able to be patched. This is necessary to check that
functions which need to be patched are able to be.

On arm64 this means the presence of 2 NOP instructions at function entry
which are patched by ftrace to call the ftrace handling code. These 2 NOPs
are inserted by the compiler and the location of them is recorded in a
section called "__patchable_function_entries". Check whether a symbol has a
corresponding entry in the "__patchable_function_entries" section and if so
mark it as "has_func_profiling".

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Make error message standard across architectures when no patchable entry
 - Don't store __patchable_function_entries section in
   kpatch_find_func_profiling_calls(), instead find it each time
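
A minimal sketch of that detection, assuming kpatch's create-diff-object
structures (struct kpatch_elf/section/symbol/rela, list_for_each_entry) and
the has_func_profiling field; the exact upstream implementation may differ:

/* Sketch only: relies on kpatch-elf.h definitions (struct kpatch_elf,
 * struct section, struct symbol, struct rela, list_for_each_entry). */
static void kpatch_find_func_profiling_calls_aarch64(struct kpatch_elf *kelf)
{
	struct symbol *sym;
	struct section *sec;
	struct rela *rela;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->type != STT_FUNC || !sym->sec)
			continue;

		/* look for a __patchable_function_entries entry whose
		 * relocation points back into this function */
		list_for_each_entry(sec, &kelf->sections, list) {
			if (strcmp(sec->name, "__patchable_function_entries") ||
			    !sec->rela)
				continue;
			list_for_each_entry(rela, &sec->rela->relas, list) {
				if (rela->sym == sym ||
				    rela->sym->sec == sym->sec) {
					sym->has_func_profiling = 1;
					goto next_sym;
				}
			}
		}
next_sym:;
	}
}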
…d populate

The function kpatch_create_mcount_sections() allocates the __mcount_loc section
and then populates it with functions which have a patchable entry.

The following patch will add aarch64 support to this function, where the
allocation will have to be done before kelf_patched is torn down.
Thus split this function so that the allocation can be performed earlier,
with the populating done as before.

No intended functional change.

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Add patch to series
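
A rough sketch of the resulting two-phase shape, with assumed names
(create_section_pair() as the allocator helper, add_mcount_entry() as a
hypothetical per-function writer); only the allocate-then-populate split is
the point:

/* Phase 1: allocate while kelf_patched is still alive. */
static struct section *kpatch_alloc_mcount_sections(struct kpatch_elf *kelf,
						    struct kpatch_elf *kelfout)
{
	int nr = 0;
	struct symbol *sym;

	list_for_each_entry(sym, &kelfout->symbols, list)
		if (sym->type == STT_FUNC && sym->status != SAME &&
		    sym->has_func_profiling)
			nr++;

	/* one pointer-sized entry per patchable function */
	return create_section_pair(kelfout, "__mcount_loc", sizeof(void *), nr);
}

/* Phase 2: fill in the entries, as the old single function did. */
static void kpatch_populate_mcount_sections(struct kpatch_elf *kelf)
{
	int index = 0;
	struct symbol *sym;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->type != STT_FUNC || sym->status == SAME ||
		    !sym->has_func_profiling)
			continue;
		add_mcount_entry(kelf, sym, index++);	/* hypothetical helper */
	}
}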
…arch64

The __mcount_loc section contains the addresses of patchable ftrace sites,
which the ftrace infrastructure in the kernel uses to build a list of traceable
functions and to know where to patch to enable tracing of them. On aarch64 this
section is called __patchable_function_entries and is generated by the
compiler. The kernel recognises either __mcount_loc or
__patchable_function_entries, but for aarch64 use __patchable_function_entries
as it is what is expected.

Add aarch64 support to kpatch_alloc_mcount_sections(). The SHF_LINK_ORDER
section flag must be copied so that it matches, which avoids the following
linker error:

ld: __patchable_function_entries has both ordered [...] and unordered [...] sections

Add aarch64 support to kpatch_populate_mcount_sections(). Check for the 2
required NOP instructions on function entry, which may be preceded by a BTI C
instruction depending on whether the function is a leaf function. This
determines the offset of the patch site.

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Don't preserve the __patchable_function_entries section from the patched elf
   as this is already verified by kpatch_check_func_profiling_calls()
 - Instead get the patch entry offset by checking for a preceding BTI C instr
 - Copy the section flags for __patchable_function_entries
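
A standalone sketch of that entry check, using the encodings visible in the
test disassembly later in this PR (BTI C = 0xd503245f, NOP = 0xd503201f); the
returned value is the byte offset at which the two ftrace NOPs start:

#include <stddef.h>
#include <stdint.h>

/*
 * Sketch: find the aarch64 patch-site offset within a function's first
 * instructions.  The two ftrace NOPs may be preceded by a BTI C landing
 * pad; return the byte offset of the NOP pair, or -1 if the expected
 * pattern is absent.
 */
static long aarch64_patch_site_offset(const uint32_t *insn, size_t nr_insns)
{
	size_t i = 0;

	if (nr_insns >= 1 && insn[0] == 0xd503245f)	/* bti c */
		i = 1;
	if (nr_insns < i + 2)
		return -1;
	if (insn[i] != 0xd503201f || insn[i + 1] != 0xd503201f)	/* nop; nop */
		return -1;
	return (long)(i * 4);
}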

---

rebased, added sh_link fix from Suraj's later commit
  "kpatch-build: Enable ARM64 support"

Signed-off-by: Pete Swain <swine@google.com>
Add the final support required for aarch64 and enable building on that arch.

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Add # shellcheck disable=SC2086
 - Add comment to kpatch_is_mapping_symbol()
On aarch64, only the ASSERT_RTNL macro is affected by source line number
changes (WARN, BUG, etc. no longer embed line numbers in the instruction
stream).  A small test function that invokes the macro, for a line change
from 42 to 43:

 0000000000000000 <test_assert_rtnl>:
    0:	d503245f 	bti	c
    4:	d503201f 	nop
    8:	d503201f 	nop
    c:	d503233f 	paciasp
   10:	a9bf7bfd 	stp	x29, x30, [sp, #-16]!
   14:	910003fd 	mov	x29, sp
   18:	94000000 	bl	0 <rtnl_is_locked>
 			18: R_AARCH64_CALL26	rtnl_is_locked
   1c:	34000080 	cbz	w0, 2c <test_assert_rtnl+0x2c>
   20:	a8c17bfd 	ldp	x29, x30, [sp], #16
   24:	d50323bf 	autiasp
   28:	d65f03c0 	ret
   2c:	90000000 	adrp	x0, 0 <test_assert_rtnl>
 			2c: R_AARCH64_ADR_PREL_PG_HI21	.data.once
   30:	39400001 	ldrb	w1, [x0]
 			30: R_AARCH64_LDST8_ABS_LO12_NC	.data.once
   34:	35ffff61 	cbnz	w1, 20 <test_assert_rtnl+0x20>
   38:	52800022 	mov	w2, #0x1                   	// #1
   3c:	90000001 	adrp	x1, 0 <test_assert_rtnl>
 			3c: R_AARCH64_ADR_PREL_PG_HI21	.rodata.str1.8+0x8
   40:	39000002 	strb	w2, [x0]
 			40: R_AARCH64_LDST8_ABS_LO12_NC	.data.once
   44:	91000021 	add	x1, x1, #0x0
 			44: R_AARCH64_ADD_ABS_LO12_NC	.rodata.str1.8+0x8
-  48:	52800542 	mov	w2, #0x2a                  	// #42
+  48:	52800562 	mov	w2, #0x2b                  	// #43
   4c:	90000000 	adrp	x0, 0 <test_assert_rtnl>
 			4c: R_AARCH64_ADR_PREL_PG_HI21	.rodata.str1.8+0x20
   50:	91000000 	add	x0, x0, #0x0
 			50: R_AARCH64_ADD_ABS_LO12_NC	.rodata.str1.8+0x20
   54:	94000000 	bl	0 <__warn_printk>
 			54: R_AARCH64_CALL26	__warn_printk
   58:	d4210000 	brk	#0x800
   5c:	17fffff1 	b	20 <test_assert_rtnl+0x20>

Create an implementation of kpatch_line_macro_change_only() for aarch64,
modeled after the other architectures.  Only look for relocations to
__warn_printk, which ASSERT_RTNL invokes.

Based-on-s390x-code-by: C. Erastus Toe <ctoe@redhat.com>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
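
A tiny sketch of the aarch64-specific piece, assuming the caller has already
narrowed the diff down to a single differing immediate load and is inspecting
the relocation at that spot (the helper name is illustrative, not upstream):

#include <string.h>

/* On aarch64, only __warn_printk (what ASSERT_RTNL expands to) is
 * accepted as the callee of a line-number-only change. */
static int aarch64_line_macro_callee(const char *sym_name)
{
	return sym_name && !strcmp(sym_name, "__warn_printk");
}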
Update the kpatch-unit-test-objs submodule reference to add aarch64 unit
tests.

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Mapping symbols in aarch64 ELF have a size of 0, so exclude them in the
section symbol replacing code, just as kpatch_correlate_symbols() does.
This fixes the data-read-mostly unit test on aarch64.

Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
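
A sketch of that exclusion: aarch64 mapping symbols ($x / $d, possibly
suffixed, e.g. "$x.3") are zero-size markers for code/data regions rather
than real contents, so they must be skipped when replacing section symbols.
The size/name checks below are illustrative:

#include <stdbool.h>

static bool is_aarch64_mapping_symbol(const char *name, unsigned long size)
{
	if (size != 0 || !name)
		return false;
	return name[0] == '$' && (name[1] == 'x' || name[1] == 'd') &&
	       (name[2] == '\0' || name[2] == '.');
}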
Copy from kernel source tree.

Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
The new clang toolchain on arm64 produces an individual
__patchable_function_entries section for each patchable function in
-ffunction-sections mode, rather than the traditional single __mcount_loc
section.

Bend the existing logic to detect this multiplicity in the incoming
kelf objects and allocate N identical one-entry sections.
These are retrieved as needed by a new function, find_nth_section_by_name(),
and attached to the .text sections they describe.

These __pfe sections are not actually arm64-specific, but a generic
enhancement across gcc & clang to allow better garbage collection of
unreferenced object sections and of the mcount/pfe entries which refer to them.
The __pfe sections are combined at kernel-or-module final link,
since 5.19.9's commit 9440155ccb948f8e3ce5308907a2e7378799be60.
From clang-11, __pfe is supported for x86, though not yet used by the kernel.

The split between allocate/populate phases here is necessary to
enumerate/populate the outgoing section headers before beginning to
produce output sections.

Also adds some missing \n to log_debug()s

Signed-off-by: Pete Swain <swine@google.com>
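
A plausible shape for the helper named above, written against kpatch-style
structures (assumed here, not the exact upstream signature):

/* Return the n-th (0-based) section with the given name, or NULL. */
static struct section *find_nth_section_by_name(struct kpatch_elf *kelf,
						int n, const char *name)
{
	struct section *sec;

	list_for_each_entry(sec, &kelf->sections, list) {
		if (strcmp(sec->name, name))
			continue;
		if (n-- == 0)
			return sec;
	}
	return NULL;
}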
On arm64, kpatch_find_func_profiling_calls() was skipping
leaf functions, which have no relocations, so they weren't patchable.

Other archs need to walk a function's reloc entries to
check for __fentry__ or __mcount, so for them it is valid to skip
functions without sym->sec->rela: they cannot be patchable,
else they would have at least an __fentry__ call relocation.

But arm64 marks functions patchable in a different way, with per-function
__patchable_function_entries sections referring _to_ the function,
not relocations _within_ the function, so a function without relocations
for text or data can still be patchable.

Move the sym->sec->rela check to the per-arch paths.

This allows gcc-static-local-var-5.patch
to generate a livepatch on arm64 & x86.

Suggested-By: Bill Wendling <morbo@google.com>
Signed-off-by: Pete Swain <swine@google.com>
Signed-off-by: Pete Swain <swine@google.com>
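
Sketch of the relocated check, structure only; the arch enum and the helper
names (aarch64_has_pfe_entry, check_fentry_mcount_rela) are assumptions for
illustration:

static void kpatch_find_func_profiling_calls(struct kpatch_elf *kelf)
{
	struct symbol *sym;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->type != STT_FUNC || !sym->sec)
			continue;

		switch (kelf->arch) {
		case AARCH64:
			/* patchability is recorded externally in
			 * __patchable_function_entries, so a leaf function
			 * with no relocations still qualifies */
			aarch64_has_pfe_entry(kelf, sym);
			break;
		default:
			/* other archs look for an __fentry__/_mcount call
			 * rela, so no relas means not patchable */
			if (!sym->sec->rela)
				continue;
			check_fentry_mcount_rela(sym);
			break;
		}
	}
}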
New toolchain/arch, new conventions for section/label/etc names.

gcc's .LCx symbols point to string literals in '.rodata.<func>.str1.*'
sections. Clang creates similar .Ltmp%d symbols in '.rodata.str'.

The function is_string_literal_section() is
generalized (perhaps too much?) to match either
- clang's/arm64 /^\.rodata\.str$/
- gcc's /^\.rodata\./ && /\.str1\./

The various matchers for .data.unlikely and .bss.unlikely
are replaced by is_data_unlikely_section(),
generalized to match
- gcc's ".data.unlikely"
- clang's ".(data|bss).module_name.unlikely"

.data.once is handled similarly.

Signed-off-by: Pete Swain <swine@google.com>
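
Hedged sketches of the two generalized matchers (the exact upstream
predicates may differ slightly):

#include <stdbool.h>
#include <string.h>

/* clang/arm64: ".rodata.str"; gcc: ".rodata.<func>.str1.<n>" */
static bool is_string_literal_section(const char *name)
{
	return !strcmp(name, ".rodata.str") ||
	       (!strncmp(name, ".rodata.", 8) && strstr(name, ".str1."));
}

/* gcc: ".data.unlikely"; clang: ".data.<mod>.unlikely", ".bss.<mod>.unlikely" */
static bool is_data_unlikely_section(const char *name)
{
	size_t len;

	if (!strcmp(name, ".data.unlikely"))
		return true;
	if (strncmp(name, ".data.", 6) && strncmp(name, ".bss.", 5))
		return false;
	len = strlen(name);
	return len > strlen(".unlikely") &&
	       !strcmp(name + len - strlen(".unlikely"), ".unlikely");
}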
merged in joe-lawrence/ppc64le-remove-eh_frame-take2;
this should resolve GitHub's test failures
Generalized kpatch_line_macro_change_only() & insn_is_load_immediate()
to collapse the aarch64 support back into the parent functions.

I'm assuming the 3rd start1 of the original
	/* Verify mov w2 <line number> */
	if (((start1[offset] & 0b11111) != 0x2) || (start1[offset+3] != 0x52) ||
	    ((start1[offset] & 0b11111) != 0x2) || (start2[offset+3] != 0x52))
was a typo for start2.

That's now absorbed into insn_is_load_immediate(), leaving just one
aarch64-specific piece: thinning out the match-list for diagnosing a
__LINE__ reference to just "__warn_printk".
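
A standalone sketch of the merged check, matching the byte pattern quoted
above ("mov w2, #imm", little-endian bytes as in the objdump output earlier
in this PR):

#include <stdbool.h>

/* Does this aarch64 instruction load an immediate into w2
 * (mov w2, #imm, i.e. MOVZ with Rd == 2), as the __LINE__ load does? */
static bool aarch64_insn_is_load_immediate(const unsigned char *insn)
{
	return (insn[0] & 0x1f) == 0x02 &&	/* Rd == w2 */
	       insn[3] == 0x52;			/* MOVZ Wd, #imm16 */
}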
If CONFIG_UBSAN is enabled, ubsan sections (.data..Lubsan_{data,type})
can be created. Keep them unconditionally.

NOTE: This patch needs to be verified.

Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
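
A minimal sketch of the keep rule, assuming a name-prefix check is
sufficient (subject to the verification noted above):

#include <stdbool.h>
#include <string.h>

/* CONFIG_UBSAN emits ".data..Lubsan_data*" / ".data..Lubsan_type*" sections. */
static bool is_ubsan_sec(const char *name)
{
	return !strncmp(name, ".data..Lubsan_data", 18) ||
	       !strncmp(name, ".data..Lubsan_type", 18);
}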
Initialize add_off earlier, so it's obviously never used uninitialized.
Clang was warning on this, even if gcc was not.
No functional change: the only path which left it undefined would call
ERROR() anyway.
On ARM64, every function section should have its own pfe section.

GCC 11/12 have a bug where only a single pfe
section is generated for all functions; this has been fixed in GCC 13.1.

As create-diff-object generates the pfe sections on its own,
it should also fix this bug rather than reproduce it.

--
Adjusted whitespace in Zimao's proposed code.

Signed-off-by: Pete Swain <swine@google.com>
Set pfe_per_function default to false.
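
One plausible reading of how the pfe_per_function switch interacts with the
allocation described above, purely illustrative (the flag name comes from this
PR; create_section_pair() usage and the true/false semantics are assumptions):

static int pfe_per_function;	/* default 0 (false), per this PR */

static void alloc_pfe_sections(struct kpatch_elf *kelf, int nr_funcs)
{
	int i;

	if (pfe_per_function) {
		/* one one-entry section per patched function
		 * (the GCC 13.1 / clang layout) */
		for (i = 0; i < nr_funcs; i++)
			create_section_pair(kelf, "__patchable_function_entries",
					    sizeof(void *), 1);
	} else {
		/* a single combined section */
		create_section_pair(kelf, "__patchable_function_entries",
				    sizeof(void *), nr_funcs);
	}
}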