Commit 0938b161 authored by Pankaj Raghav, committed by Andrew Morton

mm: don't set readahead flag on a folio when lookahead_size > nr_to_read

The readahead flag is set on a folio based on lookahead_size and
nr_to_read.  For example, when readahead happens from index to index +
nr_to_read, the readahead `mark` offset from index is set to
nr_to_read - lookahead_size.

There are scenarios where lookahead_size > nr_to_read.  For example, a
readahead window was created, but the file was truncated before the
readahead could start.  do_page_cache_ra() will clamp nr_to_read if the
readahead window extends beyond EOF after the truncation.  When this
happens, the readahead flag should not be set on any folio in the
current readahead window.
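
For reference, here is a paraphrased sketch of the clamping path in
do_page_cache_ra() (mm/readahead.c); it is not a verbatim copy and only
keeps the EOF handling.  Note that only nr_to_read is clamped while
lookahead_size is passed through unchanged, so lookahead_size can end
up larger than nr_to_read:

static void do_page_cache_ra(struct readahead_control *ractl,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct inode *inode = ractl->mapping->host;
	unsigned long index = readahead_index(ractl);
	loff_t isize = i_size_read(inode);
	pgoff_t end_index;	/* last page we want to read */

	if (isize == 0)
		return;

	end_index = (isize - 1) >> PAGE_SHIFT;
	if (index > end_index)
		return;

	/* Don't read past the page containing the last byte of the file. */
	if (nr_to_read > end_index - index)
		nr_to_read = end_index - index + 1;

	/* lookahead_size is not clamped, so it may now exceed nr_to_read. */
	page_cache_ra_unbounded(ractl, nr_to_read, lookahead_size);
}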

The current calculation of `mark` with mapping_min_order > 0 gives an
incorrect result when lookahead_size > nr_to_read because of the
round-up operation:

index = 128
nr_to_read = 16
lookahead_size = 28
mapping_min_order = 4 (16 pages)

ra_folio_index = round_up(128 + 16 - 28, 16) = 128;
mark = 128 - 128 = 0; # offset from index to set RA flag

In the above example, the lookahead_size actually lies outside the
current readahead window.  Without this patch, the RA flag is set
incorrectly on the folio at index 128.  Marking the readahead flag on
the wrong folio triggers a readahead when it is not necessary.

Explicitly initialize `mark` to ULONG_MAX and only calculate it when
lookahead_size falls within the readahead window.
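
To make the before/after behaviour concrete, here is a small,
hypothetical userspace sketch of the arithmetic with the numbers from
the example above (not kernel code; round_up() is reimplemented here to
mirror the kernel macro for power-of-two alignment):

/* Hypothetical userspace illustration, not kernel code. */
#include <limits.h>
#include <stdio.h>

static unsigned long round_up(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned long index = 128, nr_to_read = 16, lookahead_size = 28;
	unsigned long min_nrpages = 16;	/* mapping_min_order = 4 */
	unsigned long old_mark, new_mark;

	/* Old calculation: 116 is rounded back up to 128, so mark = 0. */
	old_mark = round_up(index + nr_to_read - lookahead_size,
			    min_nrpages) - index;

	/* New calculation: leave mark unset when lookahead exceeds the window. */
	new_mark = ULONG_MAX;
	if (lookahead_size <= nr_to_read)
		new_mark = round_up(index + nr_to_read - lookahead_size,
				    min_nrpages) - index;

	printf("old mark = %lu, new mark = %lu\n", old_mark, new_mark);
	return 0;
}

With these inputs the old calculation yields mark = 0, i.e. the RA flag
would land on the folio at index 128, while the new calculation leaves
mark at ULONG_MAX so no folio in this window is marked.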

Link: https://lkml.kernel.org/r/20241017062342.478973-1-kernel@pankajraghav.com


Fixes: 26cfdb39 ("readahead: allocate folios with mapping_min_order in readahead")
Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -206,9 +206,9 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		unsigned long nr_to_read, unsigned long lookahead_size)
 {
 	struct address_space *mapping = ractl->mapping;
-	unsigned long ra_folio_index, index = readahead_index(ractl);
+	unsigned long index = readahead_index(ractl);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	unsigned long mark, i = 0;
+	unsigned long mark = ULONG_MAX, i = 0;
 	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
 
 	/*
@@ -232,9 +232,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 * index that only has lookahead or "async_region" to set the
 	 * readahead flag.
 	 */
-	ra_folio_index = round_up(readahead_index(ractl) + nr_to_read - lookahead_size,
-				  min_nrpages);
-	mark = ra_folio_index - index;
+	if (lookahead_size <= nr_to_read) {
+		unsigned long ra_folio_index;
+
+		ra_folio_index = round_up(readahead_index(ractl) +
+					  nr_to_read - lookahead_size,
+					  min_nrpages);
+		mark = ra_folio_index - index;
+	}
 	nr_to_read += readahead_index(ractl) - index;
 	ractl->_index = index;