Commit e4205cc
Author: Rafael Aquini (committer)
kernel: be more careful about dup_mmap() failures and uprobe registering
JIRA: https://issues.redhat.com/browse/RHEL-84184
CVE: CVE-2025-21709

Conflicts:
  * kernel/events/uprobes.c: a notable context difference in the 1st hunk due to RHEL-9 missing the following upstream commits: 87195a1, 2bf8e5a, and dd1a756; and a notable context difference in the 2nd hunk due to RHEL-9 missing the following upstream commits: 84455e6 and 8617408. None of the listed commits is relevant to this backport.

This patch is a backport of the following upstream commit:

commit 64c37e1
Author: Liam R. Howlett <Liam.Howlett@Oracle.com>
Date: Mon Jan 27 12:02:21 2025 -0500

    kernel: be more careful about dup_mmap() failures and uprobe registering

    If a memory allocation fails during dup_mmap(), the maple tree can be
    left in an unsafe state for iterators other than the exit path. All
    the locks are dropped before the exit_mmap() call (in mm/mmap.c), but
    the incomplete mm_struct can still be reached through (at least) the
    rmap finding the vmas, which hold a pointer back to the mm_struct.

    Up to this point, there had been no issues with being able to find an
    mm_struct that was only partially initialised. Syzbot was able to make
    the incomplete mm_struct fail with recent forking changes, so it has
    been proven unsafe to use an mm_struct that hasn't been fully
    initialised, as referenced in the link below.

    Although 8ac662f ("fork: avoid inappropriate uprobe access to invalid
    mm") fixed the uprobe access, it does not completely remove the race.

    This patch sets MMF_OOM_SKIP to avoid iteration of the vmas on the OOM
    side (even though this mm is extremely unlikely to be selected as an
    OOM victim in the race window), and sets MMF_UNSTABLE to prevent other
    potential users from using the partially initialised mm_struct.

    When registering vmas for uprobe, skip the vmas in an mm that is
    marked unstable. Modifying a vma in an unstable mm may cause issues if
    the mm isn't fully initialised.
    Link: https://lore.kernel.org/all/6756d273.050a0220.2477f.003d.GAE@google.com/
    Link: https://lkml.kernel.org/r/20250127170221.1761366-1-Liam.Howlett@oracle.com
    Fixes: d240629 ("fork: use __mt_dup() to duplicate maple tree in dup_mmap()")
    Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
    Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Jann Horn <jannh@google.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Peng Zhang <zhangpeng.00@bytedance.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
Parent: 6abde43

2 files changed: +18 −3

kernel/events/uprobes.c — 4 additions, 0 deletions

```diff
@@ -26,6 +26,7 @@
 #include <linux/task_work.h>
 #include <linux/shmem_fs.h>
 #include <linux/khugepaged.h>
+#include <linux/oom.h>		/* check_stable_address_space */
 
 #include <linux/uprobes.h>
 
@@ -1048,6 +1049,9 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 		goto free;
 
 	mmap_write_lock(mm);
+	if (check_stable_address_space(mm))
+		goto unlock;
+
 	vma = find_vma(mm, info->vaddr);
 	if (!vma || !valid_vma(vma, is_register) ||
 	    file_inode(vma->vm_file) != uprobe->inode)
```

kernel/fork.c — 14 additions, 3 deletions

```diff
@@ -776,16 +776,27 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		mt_set_in_rcu(vmi.mas.tree);
 		ksm_fork(mm, oldmm);
 		khugepaged_fork(mm, oldmm);
-	} else if (mpnt) {
+	} else {
+
 		/*
 		 * The entire maple tree has already been duplicated. If the
 		 * mmap duplication fails, mark the failure point with
 		 * XA_ZERO_ENTRY. In exit_mmap(), if this marker is encountered,
 		 * stop releasing VMAs that have not been duplicated after this
 		 * point.
 		 */
-		mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1);
-		mas_store(&vmi.mas, XA_ZERO_ENTRY);
+		if (mpnt) {
+			mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1);
+			mas_store(&vmi.mas, XA_ZERO_ENTRY);
+			/* Avoid OOM iterating a broken tree */
+			set_bit(MMF_OOM_SKIP, &mm->flags);
+		}
+		/*
+		 * The mm_struct is going to exit, but the locks will be dropped
+		 * first.  Set the mm_struct as unstable is advisable as it is
+		 * not fully initialised.
+		 */
+		set_bit(MMF_UNSTABLE, &mm->flags);
 	}
 out:
 	mmap_write_unlock(mm);
```
