Why do built-in modules slow down kernel boot?
Greg KH
greg at kroah.com
Tue Sep 30 14:28:05 EDT 2014
On Tue, Sep 30, 2014 at 12:27:12PM +0200, Michele Curti wrote:
> Hi all,
> it's just out of curiosity.
>
> Since the use of an initramfs doubles the kernel boot time, I decided to play
> around a little, compiling the modules required to mount root as built-in
> (starting from a localmodconfig).
>
> Everything went OK, the system starts and the kernel boot time is good:
> Startup finished in 1.749s (firmware) + 375ms (loader) +
> 1.402s (kernel) + 716ms (userspace) = 4.244s
> (from systemd-analyze).
>
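(For reference, this is roughly the workflow being described above -- a
sketch assuming a kernel source tree on the running machine; the two
config options shown are just examples, the real set depends on your
hardware:

  make localmodconfig                        # trim config to loaded modules (=m)
  scripts/config --enable CONFIG_EXT4_FS     # root filesystem, built-in (=y)
  scripts/config --enable CONFIG_SATA_AHCI   # disk controller, built-in (=y)
  make olddefconfig                          # resolve any new dependencies
  make -j"$(nproc)"

For the "everything built-in" experiment described next, "make
localyesconfig" converts the whole loaded-module set to =y in one step.)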
> My next idea was: "Well, why not make all the modules built-in? Then I avoid
> reading from disk at every module load.. and all of them get loaded
> anyway", but the result was the opposite of what I expected: kernel boot time
> increased from 1.4 to 3 seconds.
>
> So my question is: how can this be explained?
>
> My theory is that, with all the modules compiled as built-in, the kernel calls
> all the module __init functions sequentially (using a single core?) and
> lets userspace start only when everything is done.
Yes, that is correct. And some of those init functions do lots of "odd"
things, thinking that the hardware for those drivers really is present,
so they can take a while to figure out that they shouldn't be running at
all.
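To illustrate, here is a hypothetical driver init of that sort (the names,
address, and magic value are all made up): it polls for hardware that may
not exist.  As a module this only runs when the module is loaded; built
in, module_init() turns it into an initcall that runs unconditionally, in
sequence with all the others, during boot:

  #include <linux/module.h>
  #include <linux/delay.h>
  #include <linux/io.h>

  static int __init foo_init(void)
  {
      /* Map a (made-up) MMIO region and poll a status register,
       * waiting for hardware that may simply not be there. */
      void __iomem *regs = ioremap(0xfed00000, 0x100);
      int tries = 100;

      if (!regs)
          return -ENOMEM;

      while (tries--) {
          if (readl(regs) == 0x1234)   /* made-up "device ready" value */
              break;
          msleep(10);                  /* 100 tries * 10ms = up to 1s of
                                        * boot time if nothing answers */
      }

      iounmap(regs);
      return tries < 0 ? -ENODEV : 0;
  }
  module_init(foo_init);
  MODULE_LICENSE("GPL");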
Also, a larger kernel takes longer to read off the disk and load into
memory, although with an SSD it shouldn't be noticeable.
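If you want to see exactly where the time goes on the kernel side, boot
once with "initcall_debug" on the kernel command line; every initcall's
duration is then logged, so something like this lists the worst offenders
(the awk field grabs the number out of "... returned 0 after 1234 usecs"):

  dmesg | awk '/initcall.*returned/ { print $(NF-1), $0 }' | sort -n | tail

(systemd-analyze blame does the same job for the userspace part.)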
greg k-h