author      Eric W. Biederman <ebiederm@xmission.com>    2013-10-05 19:26:05 -0700
committer   David S. Miller <davem@davemloft.net>        2013-10-07 15:23:14 -0400
commit      5cde282938915f36a2e6769b51c24c4159654859 (patch)
tree        9e364e2988fb3556313eda68eea6fb5655b6df1e /net/sched
parent      d639feaaf3f40cd90b75a2fec5b7d5c3f96c2c88 (diff)
net: Separate the close_list and the unreg_list v2
Separate the unreg_list and the close_list in dev_close_many, preventing
dev_close_many from permuting the unreg_list. The permutations of the
unreg_list have resulted in cases where the loopback device is accessed
after it has been freed, in code such as dst_ifdown, resulting in subtle
memory corruption.
This is the second bug from sharing the storage between the close_list
and the unreg_list. The issues that crop up with sharing are
apparently too subtle to show up in normal testing or usage, so let's
forget about being clever and use two separate lists.
v2: Make all callers pass in a close_list to dev_close_many
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
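
The shape of the v2 fix, stripped to its essentials: the device structure carries two independent embedded list_head members, and each caller of the close path threads devices through close_list only, so nothing the close code does can reorder unreg_list. Below is a minimal kernel-style sketch of that pattern; demo_dev, demo_close_many and demo_close_one are made-up names for illustration, not the actual net/core code.

#include <linux/list.h>

/*
 * Hypothetical stand-in for struct net_device: two independent list
 * nodes, one per list the object can sit on.
 */
struct demo_dev {
	struct list_head unreg_list;	/* threaded only by unregister code */
	struct list_head close_list;	/* threaded only by close code */
};

/* The close path iterates (and may reorder) close_list, never unreg_list. */
static void demo_close_many(struct list_head *head)
{
	struct demo_dev *dev;

	list_for_each_entry(dev, head, close_list) {
		/* shut the device down */
	}
}

/* Each caller builds its own close_list, as the v2 note requires. */
static void demo_close_one(struct demo_dev *dev)
{
	LIST_HEAD(close_list);

	list_add(&dev->close_list, &close_list);
	demo_close_many(&close_list);
	list_del(&close_list);
}

With separate nodes, list surgery done at close time cannot leave unreg_list pointing at an already-freed device, which is the corruption described above.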
Diffstat (limited to 'net/sched')
-rw-r--r--   net/sched/sch_generic.c   6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index e7121d29c4bd..7fc899a943a8 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -829,7 +829,7 @@ void dev_deactivate_many(struct list_head *head)
 	struct net_device *dev;
 	bool sync_needed = false;
 
-	list_for_each_entry(dev, head, unreg_list) {
+	list_for_each_entry(dev, head, close_list) {
 		netdev_for_each_tx_queue(dev, dev_deactivate_queue,
 					 &noop_qdisc);
 		if (dev_ingress_queue(dev))
@@ -848,7 +848,7 @@ void dev_deactivate_many(struct list_head *head)
 		synchronize_net();
 
 	/* Wait for outstanding qdisc_run calls. */
-	list_for_each_entry(dev, head, unreg_list)
+	list_for_each_entry(dev, head, close_list)
 		while (some_qdisc_is_busy(dev))
 			yield();
 }
@@ -857,7 +857,7 @@ void dev_deactivate(struct net_device *dev)
 {
 	LIST_HEAD(single);
 
-	list_add(&dev->unreg_list, &single);
+	list_add(&dev->close_list, &single);
 	dev_deactivate_many(&single);
 	list_del(&single);
 }
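
Why the member name passed to list_for_each_entry() has to change in step with the list_add() call: the iterator recovers each containing structure by subtracting the named member's offset from the node pointer (container_of), so it only works if it names the member the devices were actually threaded through. Below is a self-contained userspace sketch of that mechanism; struct node, list_add_node and demo_dev are simplified stand-ins invented for the example, not the kernel's real helpers in include/linux/list.h.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's intrusive list primitives. */
struct node { struct node *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void list_init(struct node *head) { head->next = head->prev = head; }

static void list_add_node(struct node *n, struct node *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* Two embedded nodes: the same object can sit on two unrelated lists. */
struct demo_dev {
	const char *name;
	struct node unreg_list;
	struct node close_list;
};

int main(void)
{
	struct demo_dev lo = { .name = "lo" }, eth = { .name = "eth0" };
	struct node close_head;
	struct node *pos;

	list_init(&close_head);
	list_add_node(&lo.close_list, &close_head);
	list_add_node(&eth.close_list, &close_head);

	/*
	 * Walking the list must use the close_list offset, because that is
	 * the member the entries were linked through; using the unreg_list
	 * offset here would compute pointers into the wrong place.
	 */
	for (pos = close_head.next; pos != &close_head; pos = pos->next)
		printf("closing %s\n",
		       container_of(pos, struct demo_dev, close_list)->name);

	return 0;
}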