question about kref API
Jeff Haran
Jeff.Haran at citrix.com
Tue Jul 22 13:25:03 EDT 2014
> -----Original Message-----
> From: Greg KH [mailto:greg at kroah.com]
> Sent: Monday, July 21, 2014 7:18 PM
> To: Jeff Haran
> Cc: kernelnewbies at kernelnewbies.org
> Subject: Re: question about kref API
>
> On Tue, Jul 22, 2014 at 12:27:20AM +0000, Jeff Haran wrote:
> > Hi,
> >
> > I've been reading Documentation/kref.txt in order to understand the
> > usage of this API. There is part of this documentation that I am
> > having difficulty understanding and was hoping somebody on this list
> > could clarify this. One of my assumptions in reading this is that the
> > expected usage of this API is that for a given object embedding a
> > kref, once the object has been initialized the number of calls to
> > "put" a given instance of an object should never exceed the number of
> > calls to "get" that same instance.
>
> If it does, the object will be cleaned up and deleted from the system,
> so you no longer have a valid pointer.
>
> > Maybe that's the root of my misunderstanding, but my assumption is
> > that calls to kref_get() and kref_put() are expected to be made in
> > pairs for a given instance of struct kref. If this is wrong, please
> > let me know.
>
> "pairs" in that the same number must be called in order for things to
> work properly.
>
> > Kref.txt includes some sample code that discusses using a mutex to
> > serialize the execution of its get_entry() and put_entry() functions:
> >
> > 146 static DEFINE_MUTEX(mutex);
> > 147 static LIST_HEAD(q);
> > 148 struct my_data
> > 149 {
> > 150         struct kref refcount;
> > 151         struct list_head link;
> > 152 };
> > 153
> > 154 static struct my_data *get_entry()
> > 155 {
> > 156         struct my_data *entry = NULL;
> > 157         mutex_lock(&mutex);
> > 158         if (!list_empty(&q)) {
> > 159                 entry = container_of(q.next, struct my_data, link);
> > 160                 kref_get(&entry->refcount);
> > 161         }
> > 162         mutex_unlock(&mutex);
> > 163         return entry;
> > 164 }
> > 165
> > 166 static void release_entry(struct kref *ref)
> > 167 {
> > 168         struct my_data *entry = container_of(ref, struct my_data, refcount);
> > 169
> > 170         list_del(&entry->link);
> > 171         kfree(entry);
> > 172 }
> > 173
> > 174 static void put_entry(struct my_data *entry)
> > 175 {
> > 176         mutex_lock(&mutex);
> > 177         kref_put(&entry->refcount, release_entry);
> > 178         mutex_unlock(&mutex);
> > 179 }
> >
> > The sample code does not show the creation of the linked list headed
> > by q,
>
> That is done there in the static initializer.
You are referring to line 147 here, right? That creates an empty list, if I am following the code correctly. What I meant was that the sample code doesn't show how instances of struct my_data get initialized and inserted into the list at q. I suppose it makes sense to leave that out for brevity, and you've made it clear below that for this to work correctly, kref_init() must have been called on the refcount field of any struct my_data instance that was put into the list headed by q. Thanks.
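To make that concrete, here is roughly the step I was looking for, sketched as a compilable userspace model. The stand-in kref and list_head types are mine, just to keep the sketch self-contained; the real code would use <linux/kref.h>, <linux/list.h> and kmalloc(), and would take the mutex around the list insertion (omitted here since this sketch is single-threaded):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Userspace stand-ins for the kernel types; illustrative only. */
struct kref { atomic_int refcount; };
static void kref_init(struct kref *k) { atomic_store(&k->refcount, 1); }

struct list_head { struct list_head *next, *prev; };
static void list_add_tail(struct list_head *n, struct list_head *head)
{
        n->prev = head->prev;
        n->next = head;
        head->prev->next = n;
        head->prev = n;
}

static struct list_head q = { &q, &q };         /* empty list, as line 147 does */

struct my_data {
        struct kref refcount;
        struct list_head link;
};

/* The step the sample leaves out: allocate an entry, kref_init() it so
 * the count starts at 1 (that initial reference is effectively owned by
 * the list), then insert it at the tail of q. */
static struct my_data *new_entry(void)
{
        struct my_data *entry = malloc(sizeof(*entry));

        if (!entry)
                return NULL;
        kref_init(&entry->refcount);
        list_add_tail(&entry->link, &q);
        return entry;
}
```

With that in place, the first get_entry() bumps the count from 1 to 2, and the count only drops back to 0 once the list's own reference is also put.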
> > so it is unclear to me what the value of the reference counts in
> > these my_data structures are at the time they are enqueued to the
> > list. Put another way, it's not clear to me from reading this whether
> > kref_init(&(entry->refcount)) was called before the instances of
> > struct my_data were put into the list.
>
> Yes it was.
>
> > So I have two interpretations of what is being illustrated with this
> > sample code and neither makes much sense to me.
> >
> > 1) The krefs are initialized before they go into the list.
>
> Yes.
>
> > If the krefs in the instances of struct my_data are initialized by
> > kref_init() before they go into the list and thus start off with a 1,
> > then it would seem to me that the mutex would not be necessary so long
> > as the number of calls to put_entry() never exceeds the number of
> > calls to get_entry().
>
> The mutex is needed as multiple threads could be calling kref_put at the
> same time if you don't have that.
>
> Best thing is, don't use the "raw" kref_put() calls, use the
> kref_put_mutex() or kref_put_spinlock_irqsave() call instead. It's much
> easier that way and arguably, I should have done that when I created the
> API over a decade ago.
>
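To convince myself of the kref_put_mutex() semantics, I modeled it in userspace. My reading is that the mutex is taken only on the final put, and release() runs with it held and is expected to unlock it; the real helper also closes a race by performing the final decrement under the lock (via refcount_dec_and_mutex_lock()), which this simplified single-threaded model does not attempt:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Userspace stand-in for struct kref; illustrative only. */
struct kref { atomic_int refcount; };
static void kref_init(struct kref *k) { atomic_store(&k->refcount, 1); }
static void kref_get(struct kref *k)  { atomic_fetch_add(&k->refcount, 1); }

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int release_calls;

/* Simplified model of kernel kref_put_mutex(): drop a reference and, only
 * if it was the last one, take the mutex and call release() with it held.
 * (The real helper does the decrement-to-zero under the lock to close a
 * put-vs-get race; omitted here for brevity.) */
static int kref_put_mutex(struct kref *k,
                          void (*release)(struct kref *),
                          pthread_mutex_t *lock)
{
        if (atomic_fetch_sub(&k->refcount, 1) == 1) {
                pthread_mutex_lock(lock);
                release(k);             /* release() must unlock the mutex */
                return 1;
        }
        return 0;
}

static void release_entry(struct kref *ref)
{
        (void)ref;                      /* list_del() + kfree() would go here */
        release_calls++;
        pthread_mutex_unlock(&mutex);
}
```

So the non-final puts never touch the mutex at all, which is presumably the point of the combined helper.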
At this point it's rule (3) that I am still struggling with a bit:
50 3) If the code attempts to gain a reference to a kref-ed structure
51    without already holding a valid pointer, it must serialize access
52    where a kref_put() cannot occur during the kref_get(), and the
53    structure must remain valid during the kref_get().
In this example, every call to put_entry() results in locking and unlocking the mutex. But if I am following this right, that is only because the entry at the head of the list is removed from the list when and only when its last reference is released. If the list_del() happened for some other reason (say a timer expired, or user space sent a message to delete the entry), then taking the mutex in put_entry() wouldn't be necessary, right?
For instance, this would be valid, wouldn't it?
static void release_entry(struct kref *ref)
{
        struct my_data *entry = container_of(ref, struct my_data, refcount);

        kfree(entry);
}

static void put_entry(struct my_data *entry)
{
        kref_put(&entry->refcount, release_entry);
}

static void del_entry(struct my_data *entry)
{
        mutex_lock(&mutex);
        list_del(&entry->link);
        mutex_unlock(&mutex);
        put_entry(entry);
}
In this example, threads that want access to an entry do need to take the mutex in order to get it from the list in get_entry(), but when they are done with it and don't want it deleted from the list, they can call put_entry() without taking the mutex. The only time the mutex need be taken is when the caller also wants to delete the entry from the list, which is what del_entry() is for.
Put another way, the mutex is really there to serialize access to the list, right?
Or perhaps I am still not understanding rule (3). My apologies if I am coming across as thick here. Just trying to make sure I understand the implications of rule (3).
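In case it helps to make the question concrete, here is the whole scheme as a userspace model that I can actually run. The kref, list and mutex here are my own stand-ins (the real code would use the kernel types), and the "list" is collapsed to a single head pointer to keep the sketch short; the point is that the mutex serializes get_entry() against del_entry(), while put_entry() stays lock-free:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

/* Userspace stand-ins; illustrative only. */
struct kref { atomic_int refcount; };
static void kref_init(struct kref *k) { atomic_store(&k->refcount, 1); }
static void kref_get(struct kref *k)  { atomic_fetch_add(&k->refcount, 1); }
static int kref_put(struct kref *k, void (*release)(struct kref *))
{
        if (atomic_fetch_sub(&k->refcount, 1) == 1) {
                release(k);
                return 1;
        }
        return 0;
}

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int freed;

struct my_data {
        struct kref refcount;
        struct my_data *next;   /* one-entry "list" keeps the model short */
};
static struct my_data *q_head;

static void release_entry(struct kref *ref)
{
        struct my_data *entry = (struct my_data *)
                ((char *)ref - offsetof(struct my_data, refcount));

        free(entry);            /* entry is already off the list by now */
        freed++;
}

static struct my_data *get_entry(void)
{
        struct my_data *entry;

        pthread_mutex_lock(&mutex);     /* rule (3): no del_entry() racing us */
        entry = q_head;
        if (entry)
                kref_get(&entry->refcount);  /* list still holds its reference */
        pthread_mutex_unlock(&mutex);
        return entry;
}

static void put_entry(struct my_data *entry)
{
        kref_put(&entry->refcount, release_entry);      /* no mutex needed */
}

static void del_entry(struct my_data *entry)
{
        pthread_mutex_lock(&mutex);
        q_head = NULL;                  /* stands in for list_del() */
        pthread_mutex_unlock(&mutex);
        put_entry(entry);               /* drop the list's own reference */
}

static struct my_data *new_entry(void)
{
        struct my_data *entry = malloc(sizeof(*entry));

        kref_init(&entry->refcount);    /* count = 1, owned by the list */
        q_head = entry;
        return entry;
}
```

Running through it: new_entry() leaves the count at 1, get_entry() takes it to 2, del_entry() unlinks and drops the list's reference, and the entry is only freed when the last user calls put_entry().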
Thanks again,
Jeff Haran