Why does a "page allocation failure" occur when there is still "58*4096kB (C)" that could be used?
Valdis Klētnieks
valdis.kletnieks at vt.edu
Fri Jun 19 03:14:02 EDT 2020
On Fri, 19 Jun 2020 14:56:20 +0800, 孙世龙 sunshilong said:
> Why doesn't the kernel use two memory blocks of size 2048KB (i.e. *order 9*)
> instead of one *order 10* block (you see, there are still three free blocks, and
> 2048KB*2 = 4096KB, equivalent to the memory size of order 10)?
Most parts of the kernel, when asking for very high-order allocations, *will*
have a fallback strategy that uses smaller chunks. So, for instance, if a device
needs a 1M buffer and supports scatter-gather operations, and 1M of contiguous
memory isn't available, the kernel can ask for four 256K chunks and have the I/O
directed into the four areas. However, if the memory *has* to be contiguous (for
example, if no scatter/gather is available, or it's for an array data structure),
then it can't do that.
And in fact, that fallback could very well have happened in this case - I
didn't bother chasing back to see if the gadget driver does recovery by
allocating multiple smaller chunks.
(That's a good "exercise for the student"... :)
More information about the Kernelnewbies mailing list