path: root/net/sched
Commit log (commit message, author, age, files changed, lines changed):
* [NET]: kfree cleanup (Jesper Juhl, 2005-11-08; 6 files, -16/+9)
    From: Jesper Juhl <jesper.juhl@gmail.com>

    This is the net/ part of the big kfree cleanup patch. Remove pointless
    checks for NULL prior to calling kfree() in net/.

    Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Arnaldo Carvalho de Melo <acme@conectiva.com.br>
    Acked-by: Marcel Holtmann <marcel@holtmann.org>
    Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
    Signed-off-by: Andrew Morton <akpm@osdl.org>

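The rationale is that kfree(NULL) is defined to be a no-op, so guarding the call is redundant. A minimal before/after illustration of the pattern being removed (the pointer name is hypothetical):

        /* before: redundant NULL check around kfree() */
        if (ptr != NULL)
                kfree(ptr);

        /* after: kfree(NULL) is a no-op, so the guard can simply go away */
        kfree(ptr);
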
* [PKT_SCHED]: Correctly handle empty ematch trees (Thomas Graf, 2005-11-08; 1 file, -0/+5)
    Fixes an invalid memory reference when the basic classifier is used
    without any ematches but just actions.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* Merge branch 'red' of 84.73.165.173:/home/tgr/repos/net-2.6 (Arnaldo Carvalho de Melo, 2005-11-05; 2 files, -741/+518)
|\

| * [PKT_SCHED]: (G)RED: Introduce hard dropping (Thomas Graf, 2005-11-05; 2 files, -2/+14)
    Introduces a new flag TC_RED_HARDDROP which specifies that packets should
    still be dropped once the average queue length exceeds the maximum
    threshold, even if ECN marking is enabled. This _may_ help to avoid global
    synchronisation during small bursts from peers that advertise ECN but do
    not actually honour it. Use this option very carefully: it does more harm
    than good if (qth_max - qth_min) does not cover at least two average burst
    cycles. The difference to the current behaviour, in which we would run into
    the hard queue limit, is that thanks to the low-pass filter of RED, short
    bursts are less likely to cause global synchronisation.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

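For illustration, with the flag in place the hard-mark branch of the enqueue path roughly takes this shape. It is a sketch in terms of the generic RED helpers this series introduces; red_use_ecn(), the stats fields and the flag test are assumptions, not quoted from the patch:

        case RED_HARD_MARK:
                /* Average queue length is above qth_max.  With TC_RED_HARDDROP
                 * set we drop even when ECN marking is enabled; otherwise we
                 * try to mark the packet and only drop if marking fails. */
                if ((q->flags & TC_RED_HARDDROP) || !red_use_ecn(q) ||
                    !INET_ECN_set_ce(skb)) {
                        q->stats.forced_drop++;
                        goto congestion_drop;
                }
                q->stats.forced_mark++;
                break;
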
| * [PKT_SCHED]: GRED: Support ECN marking (Thomas Graf, 2005-11-05; 1 file, -4/+21)
    Adds a new u8 flags field in an unused padding area of the netlink message.
    Adds ECN marking support to be used instead of dropping packets
    immediately.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Fix restart of idle period in WRED mode upon dequeue and drop (Thomas Graf, 2005-11-05; 1 file, -2/+2)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Cleanup and remove unnecessary code (Thomas Graf, 2005-11-05; 1 file, -69/+31)
    Removes unnecessary includes and initializers, and simplifies the code a
    bit.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Remove auto-creation of default VQ (Thomas Graf, 2005-11-05; 1 file, -9/+0)
    Since we are no longer depending on the default VQ always being allocated,
    we can leave it up to the user to actually create it. This gives the user
    the ability to leave it out on purpose and enqueue packets directly to the
    device without applying the RED algorithm.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Don't abuse default VQ for equalizing (Thomas Graf, 2005-11-05; 1 file, -17/+20)
    Introduces a new red parameter set for use in equalize mode. Although only
    the qavg variable and the idle period marker are being used for now, this
    makes it possible to use a separate parameter set for equalizing later on.
    The use of this separate parameter set fixes a bogus start of an idle
    period in gred_drop(), which started an idle period on the default VQ even
    if equalize mode was disabled.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Remove initd flag (Thomas Graf, 2005-11-05; 1 file, -14/+1)
    The case when the default VQ is not set up yet is already handled in a
    less error prone way.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Improve error handling and messages (Thomas Graf, 2005-11-05; 1 file, -24/+44)
    Try to enqueue packets even if we cannot associate them with a VQ; this
    basically means that the default VQ has not been set up yet. We must check
    whether the VQ still exists while requeueing; the VQ might have been
    changed between the dequeue and the requeue of the underlying qdisc.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Introduce tc_index_to_dp() (Thomas Graf, 2005-11-05; 1 file, -9/+18)
    Adds a transformation function returning the DP index for a given skb
    according to its tc_index.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

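The helper is essentially a mask over the low bits of skb->tc_index; a sketch of its likely shape (GRED_VQ_MASK, assumed here to be MAX_DPs - 1, is not quoted from the patch):

        /* Map a packet to its virtual queue (DP) via the tc_index field. */
        static inline u16 tc_index_to_dp(struct sk_buff *skb)
        {
                return skb->tc_index & GRED_VQ_MASK;
        }
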
| * [PKT_SCHED]: GRED: Use generic queue management interface (Thomas Graf, 2005-11-05; 1 file, -23/+9)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Report congestion related drops as NET_XMIT_CN (Thomas Graf, 2005-11-05; 1 file, -2/+7)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Do not reset statistics in gred_reset/gred_change (Thomas Graf, 2005-11-05; 1 file, -9/+0)
    Qdiscs are not supposed to reset statistics in reset() or while changing
    parameters. My argument is that if the user wants the counters to be
    reset, he can simply remove and re-add the qdisc; that's what most users
    do anyway.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Use new generic red interface (Thomas Graf, 2005-11-05; 1 file, -133/+91)
    Simplifies the code a lot by separating the RED algorithm from the
    queueing logic. We now differentiate between probability marks and forced
    marks but sum them together again so as not to break backwards
    compatibility. This brings GRED back to the level of RED and improves the
    accuracy of the average queue length calculations when the stab suggests a
    zero shift.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Use central VQ change procedure (Thomas Graf, 2005-11-05; 1 file, -89/+84)
    Introduces a function gred_change_vq() acting as a central point to change
    VQ parameters. Fixes priority inheritance in rio mode when the default DP
    equals 0. Adds proper locking during changes.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Report out-of-bound DPs as illegal (Thomas Graf, 2005-11-05; 1 file, -6/+3)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Use a central table definition change procedure (Thomas Graf, 2005-11-05; 1 file, -52/+61)
    Introduces a function gred_change_table_def() acting as a central point to
    change the table definition. Adds the missing validation of the table
    definition (MAX_DPs > DPs > 0 and def_DP < DPs), fixing possible invalid
    memory references and oopses. Only root could trigger these, but a typo
    should not be able to crash the machine. Also adds the missing locking
    while changing the table definition: the operation of changing the number
    of DPs and removing shadowed VQs must not be interrupted by a dequeue.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

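A sketch of the added sanity checks (field names follow struct tc_gred_sopt; the exact boundary handling in the real code may differ slightly):

        /* Reject bogus DP counts and a default DP outside the table. */
        if (sopt->DPs == 0 || sopt->DPs > MAX_DPs || sopt->def_DP >= sopt->DPs)
                return -EINVAL;
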
| * [PKT_SCHED]: GRED: Dump table definition (Thomas Graf, 2005-11-05; 1 file, -0/+6)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Cleanup dumping (Thomas Graf, 2005-11-05; 1 file, -58/+34)
    Avoids the allocation of a buffer by appending the VQs directly to the skb
    and simplifies the code by using the appropriate message construction
    macros.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Transform grio to GRED_RIO_MODE (Thomas Graf, 2005-11-05; 1 file, -8/+28)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: GRED: Cleanup equalize flag and add new WRED mode detection (Thomas Graf, 2005-11-05; 1 file, -22/+65)
    Introduces a flags variable using bitops and transforms eqp to use it.
    Converts the conditions of the form (wred && rio) to (wred), since wred
    can only be enabled in rio mode anyway.

    The patch also improves WRED mode detection. The current behaviour does
    not allow WRED mode to be turned off again without removing the whole
    qdisc first. The new algorithm checks each VQ against each other, looking
    for equal priorities, every time a VQ is changed or added. The performance
    is poor, O(n**2), but it's used only during administrative tasks and the
    number of VQs is strictly limited.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

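A sketch of what such a pairwise priority check can look like (struct and field names are assumed, not quoted from the patch):

        /* WRED mode is in effect as soon as any two VQs share a priority. */
        static int gred_wred_mode_check(struct gred_sched *table)
        {
                int i, n;

                for (i = 0; i < table->DPs; i++) {
                        struct gred_sched_data *q = table->tab[i];

                        if (q == NULL)
                                continue;

                        for (n = i + 1; n < table->DPs; n++)
                                if (table->tab[n] && table->tab[n]->prio == q->prio)
                                        return 1;
                }

                return 0;
        }
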
| * [PKT_SCHED]: RED: Cleanup and remove unnecessary code (Thomas Graf, 2005-11-05; 1 file, -44/+21)
    Removes the skb trimming code, which is not needed since we never touch
    the skb upon failure. Removes unnecessary includes and initializers, and
    simplifies the code a bit. Removes Jamal's obsolete email addresses upon
    his own request.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: RED: Don't start idle periods while already idling (Thomas Graf, 2005-11-05; 1 file, -2/+4)
    We should not interrupt and restart an idle period while idling already.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

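In terms of the generic RED helpers used elsewhere in this series, the fix presumably boils down to a guard of this shape (helper names are assumptions):

        /* Only mark the start of an idle period if we are not idling already. */
        if (!red_is_idling(&q->parms))
                red_start_of_idle_period(&q->parms);
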
| * [PKT_SCHED]: RED: Use generic queue management interface (Thomas Graf, 2005-11-05; 1 file, -29/+13)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

| * [PKT_SCHED]: RED: Use new generic red interface (Thomas Graf, 2005-11-05; 1 file, -247/+74)
    Simplifies the code a lot by separating the RED algorithm from the
    queueing logic. We now differentiate between probability marks and forced
    marks but sum them together again so as not to break backwards
    compatibility.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

* | [NETEM]: Add version string (Stephen Hemminger, 2005-11-05; 1 file, -0/+3)
    Add a version string to help support issues.

    Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

* | [NETEM]: Support time based reordering (Stephen Hemminger, 2005-11-05; 1 file, -1/+84)
    Change netem to support packets getting reordered because of variations in
    delay. Introduce a special-case version of FIFO that queues packets in
    order based on the netem delay. Since netem is classful, those users that
    don't want jitter-based reordering can just insert a pfifo instead of the
    default.

    This required changes to generic skbuff code to allow finer-grained
    manipulation of sk_buff_head: insertion into the middle of a queue and a
    reverse walk.

    Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

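A rough sketch of the time-ordered insertion idea; the control-block accessor, the plain timestamp comparison (standing in for the psched comparison helpers) and the insert step are assumptions, not the literal patch:

        /* Keep the internal FIFO ordered by each packet's time_to_send:
         * walk backwards from the tail and stop at the first packet that
         * is due no later than the new one, then insert behind it. */
        struct sk_buff *skb, *prev = NULL;

        skb_queue_reverse_walk(&sch->q, skb) {
                if (netem_skb_cb(skb)->time_to_send <=
                    netem_skb_cb(nskb)->time_to_send) {
                        prev = skb;
                        break;
                }
        }
        /* insert nskb right after 'prev', or at the head if prev is NULL */
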
* | [NETEM]: use PSCHED_LESS (Stephen Hemminger, 2005-11-05; 1 file, -12/+22)
    Convert netem to use PSCHED_LESS and warn if requeue fails. With some of
    the psched clock sources, the subtraction doesn't always work right
    without wrapping.

    Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
|/

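An illustrative use of the helper in the dequeue path (argument order and the surrounding names are assumptions):

        const struct netem_skb_cb *cb = (struct netem_skb_cb *)skb->cb;
        psched_time_t now;

        PSCHED_GET_TIME(now);

        /* Only release the packet once its scheduled send time has been
         * reached; PSCHED_LESS avoids the wrap problems of raw subtraction. */
        if (!PSCHED_LESS(now, cb->time_to_send)) {
                /* due: hand the skb back to the caller */
        }
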
* [PKT_SCHED]: Rework QoS and/or fair queueing configuration (Thomas Graf, 2005-11-03; 1 file, -199/+195)
    Make "QoS and/or fair queueing" have its own menu; it's too big to be
    inlined into "Network options". Remove the obsolete NET_QOS option.
    Automatically select NET_CLS if needed. Do the same for NET_ESTIMATOR but
    allow it to be selected manually for statistical purposes. Add comments to
    separate queueing from classification. Fix dependencies and ordering of
    classifiers. Improve descriptions/help texts and remove outdated pieces.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

* [NET]: Disable NET_SCH_CLK_CPU for SMP x86 hosts (Andi Kleen, 2005-10-13; 1 file, -1/+3)
    Opterons with frequency scaling have fully unsynchronized TSCs running at
    different frequencies, so using TSCs there is not a good idea. Also some
    other x86 boxes have this problem. gettimeofday should be good enough, so
    just disable it.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [INET]: speedup inet (tcp/dccp) lookups (Eric Dumazet, 2005-10-03; 1 file, -3/+3)
    Arnaldo and I agreed it could be applied now, because I have other pending
    patches depending on this one (thank you Arnaldo). (The other important
    patch moves skc_refcnt into a separate cache line, so that SMP/NUMA
    performance doesn't suffer from cache-line ping-pongs.)

    1) First, some performance data:

       tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established().
       The most time-critical code is:

           sk_for_each(sk, node, &head->chain) {
                   if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
                           goto hit; /* You sunk my battleship! */
           }

       The sk_for_each() does use prefetch() hints, but only the beginning of
       "struct sock" is prefetched. As the first comparison in INET_MATCH uses
       inet_sk(__sk)->daddr, which is far away from the beginning of
       "struct sock", it has to bring a cold cache line into the CPU cache.
       Each iteration has to use at least two cache lines. This can be
       problematic if some chains are very long.

    2) The goal:

       The idea is to change things so that INET_MATCH() may return FALSE in
       99% of cases using only the data already in the CPU cache, with one
       cache line per iteration.

    3) Description of the patch:

       Adds a new 'unsigned int skc_hash' field in 'struct sock_common',
       filling a 32-bit hole on 64-bit platforms.

           struct sock_common {
                   unsigned short          skc_family;
                   volatile unsigned char  skc_state;
                   unsigned char           skc_reuse;
                   int                     skc_bound_dev_if;
                   struct hlist_node       skc_node;
                   struct hlist_node       skc_bind_node;
                   atomic_t                skc_refcnt;
           +       unsigned int            skc_hash;
                   struct proto            *skc_prot;
           };

       Store the full hash in this 32-bit field, not masked by
       (ehash_size - 1). Using this full hash as the first comparison done in
       INET_MATCH permits us to immediately skip the element without touching
       a second cache line in case of a miss.

       Suppress the sk_hashent/tw_hashent fields, since skc_hash (aliased to
       sk_hash and tw_hash) already contains the slot number if we mask with
       (ehash_size - 1).

       File include/net/inet_hashtables.h, 64-bit platforms:

           #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
                   (((__sk)->sk_hash == (__hash)) && \
                    ((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie)) && \
                    ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports)) && \
                    (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

       32-bit platforms:

           #define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
                   (((__sk)->sk_hash == (__hash)) && \
                    (inet_sk(__sk)->daddr == (__saddr)) && \
                    (inet_sk(__sk)->rcv_saddr == (__daddr)) && \
                    (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

       Adds a prefetch(head->chain.first) in
       __inet_lookup_established()/__tcp_v4_check_established(),
       __inet6_lookup_established()/__tcp_v6_check_established() and
       __dccp_v4_check_established() to bring the first element of the list
       into cache before the {read|write}_lock(&head->lock).

    Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
    Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [PATCH] timer initialization cleanup: DEFINE_TIMER (Ingo Molnar, 2005-09-09; 1 file, -1/+1)
    Clean up timer initialization by introducing DEFINE_TIMER, a'la
    DEFINE_SPINLOCK. Build- and boot-tested on x86. A similar patch has been
    in the -RT tree for some time.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [LIB]: Make TEXTSEARCH_BM plain tristate like the others (David S. Miller, 2005-08-29; 1 file, -0/+1)
    And select it when the relevant modules are enabled.

    Signed-off-by: David S. Miller <davem@davemloft.net>

* [NETLINK]: Convert netlink users to use group numbers instead of bitmasks (Patrick McHardy, 2005-08-29; 3 files, -7/+7)
    Signed-off-by: Patrick McHardy <kaber@trash.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: Deinline netif_carrier_{on,off}() (Denis Vlasenko, 2005-08-29; 1 file, -0/+16)
        # grep -r 'netif_carrier_o[nf]' linux-2.6.12 | wc -l
        246

        # size vmlinux.org vmlinux.carrier
           text    data     bss     dec     hex filename
        4339634 1054414  259296 5653344  564360 vmlinux.org
        4337710 1054414  259296 5651420  563bdc vmlinux.carrier

    And this ain't an allyesconfig kernel!

    Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: Kill skb->tc_classid (Patrick McHardy, 2005-08-29; 7 files, -12/+8)
    Signed-off-by: Patrick McHardy <kaber@trash.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Fix missing qdisc_destroy() in qdisc_create_dflt() (Thomas Graf, 2005-08-23; 1 file, -0/+1)
    qdisc_create_dflt() fails to destroy the newly allocated default qdisc if
    the initialization fails, resulting in leaks of all kinds. The only caller
    in mainline which may trigger this bug is sch_tbf.c, in
    tbf_create_dflt_qdisc().

    Note: qdisc_create_dflt() doesn't fulfill the official locking
    requirements of qdisc_destroy(), but since the qdisc can never be seen by
    the outside world this doesn't matter, and it can stay as-is until the
    locking of pkt_sched is cleaned up.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [EMATCH]: Remove feature ifdefs in meta ematch (Patrick McHardy, 2005-07-24; 1 file, -8/+8)
    Signed-off-by: Patrick McHardy <kaber@trash.net>
    Acked-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: em_meta: Kill TCF_META_ID_{INDEV,SECURITY,TCVERDICT} (David S. Miller, 2005-07-22; 1 file, -25/+3)
    More unusable TCF_META_* match types that need to get eliminated before
    2.6.13 goes out the door.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Acked-by: Thomas Graf <tgraf@suug.ch>

* [PKT_SCHED]: Kill TCF_META_ID_REALDEV from meta ematch (David S. Miller, 2005-07-22; 1 file, -12/+0)
    It won't exist any longer when we shrink the SKB in 2.6.14, and we should
    kill this off before anyone in userspace starts using it.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Acked-by: Thomas Graf <tgraf@suug.ch>

* [EMATCH]: Kill TCF_META_ID_TCCLASSID reference from meta ematch as well (David S. Miller, 2005-07-18; 1 file, -6/+0)
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Reduce branch mispredictions in pfifo_fast_dequeue (Thomas Graf, 2005-07-18; 1 file, -4/+3)
    The current call to __qdisc_dequeue_head leads to a branch misprediction
    for every loop iteration; the fact that the most common priority is 2
    makes this even worse. This issue was brought up by Eric Dumazet
    <dada1@cosmosbay.com>, but unlike his solution, which was to manually
    unroll the loop, this approach preserves the possibility to increase the
    number of bands at compile time.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

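The rewritten loop is roughly of this shape (a sketch consistent with sch_generic.c of that era; PFIFO_FAST_BANDS and the per-band list layout are assumed):

        static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
        {
                struct sk_buff_head *list = qdisc_priv(qdisc);
                int prio;

                for (prio = 0; prio < PFIFO_FAST_BANDS; prio++) {
                        /* Only call the dequeue helper for a non-empty band, so
                         * the common "band is empty" case stays a cheap test. */
                        if (!skb_queue_empty(list + prio)) {
                                qdisc->q.qlen--;
                                return __qdisc_dequeue_head(qdisc, list + prio);
                        }
                }

                return NULL;
        }
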
* [PKT_SCHED]: Remove debugging leftover from textsearch ematch (Thomas Graf, 2005-07-18; 1 file, -3/+0)
    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: move config options out to individual protocols (Sam Ravnborg, 2005-07-11; 1 file, -0/+37)
    Move the protocol-specific config options out to the specific protocols.
    With this change net/Kconfig now starts to become readable and serves as a
    good basis for further restructuring.

    The menu structure is left almost intact, except that indentation is fixed
    in most cases. Most visible are the INET changes, where several
    "depends on INET" are replaced with a single ifdef INET / endif pair.

    Several new files were created to accomplish this change; they are small
    but serve the purpose that config options are now distributed out where
    they belong.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: Transform skb_queue_len() binary tests into skb_queue_empty() (David S. Miller, 2005-07-08; 1 file, -1/+1)
    This is part of the grand scheme to eliminate the qlen member of
    skb_queue_head, and subsequently remove the 'list' member of sk_buff.

    Most users of skb_queue_len() want to know if the queue is empty or not,
    and that's trivially done with skb_queue_empty(), which doesn't use the
    skb_queue_head->qlen member and instead uses the queue list emptiness as
    the test.

    Signed-off-by: David S. Miller <davem@davemloft.net>

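The transformation itself is mechanical, for example (the surrounding context is illustrative, not quoted from the patch):

        /* before: emptiness test via the qlen counter */
        if (skb_queue_len(&sch->q) == 0)
                return NULL;

        /* after: same test without relying on qlen */
        if (skb_queue_empty(&sch->q))
                return NULL;
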
* [PKT_SCHED]: Blackhole queueing discipline (Thomas Graf, 2005-07-05; 2 files, -1/+55)
    Useful in combination with classful qdiscs to drop or temporarily disable
    certain flows; e.g. one could block specific ds flows with dsmark. Unlike
    the noop qdisc it can be controlled by the user, and statistics accounting
    is done.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

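Conceptually the qdisc needs little more than the following (a sketch rather than the literal sch_blackhole.c; the return code and drop helper are assumed from the qdisc API of that era):

        /* Count and free every packet on enqueue; never hand anything back. */
        static int blackhole_enqueue(struct sk_buff *skb, struct Qdisc *sch)
        {
                qdisc_drop(skb, sch);
                return NET_XMIT_BYPASS;
        }

        static struct sk_buff *blackhole_dequeue(struct Qdisc *sch)
        {
                return NULL;
        }
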
* [PKT_SCHED]: Report rate estimator configuration errors during qdisc allocation (Thomas Graf, 2005-07-05; 1 file, -5/+17)
    Current behaviour is to not report an error if a rate estimator is created
    together with a qdisc and the configuration of the rate estimator is
    bogus. This leads to unexpected behaviour because the user is not
    notified. New behaviour is to report the error and let the whole qdisc
    creation operation fail, so the user is able to fix his mistake.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Cleanup qdisc creation and alignment macros (Thomas Graf, 2005-07-05; 2 files, -43/+33)
    Adds qdisc_alloc() to share code between qdisc_create() and
    qdisc_create_dflt(). Hides the qdisc alignment behind macros and makes use
    of them.

    Signed-off-by: Thomas Graf <tgraf@suug.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>

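For illustration, the alignment helpers presumably look something like this (the 32-byte alignment constant is an assumption):

        #define QDISC_ALIGNTO   32
        #define QDISC_ALIGN(len) (((len) + QDISC_ALIGNTO - 1) & ~(QDISC_ALIGNTO - 1))

        /* The qdisc private data area starts at the next aligned boundary
         * after struct Qdisc itself. */
        static inline void *qdisc_priv(struct Qdisc *q)
        {
                return (char *)q + QDISC_ALIGN(sizeof(struct Qdisc));
        }
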