author | Mathieu Desnoyers <mathieu.desnoyers@efficios.com> | 2011-03-16 19:05:27 -0400
committer | Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> | 2011-03-16 19:05:27 -0400
commit | fe3d63ccc07ff5a9c32930f89e01ebbafe2354c1 (patch)
tree | eca731de2262ffc803434fdd8dee6782e3576724 /net
parent | ec9c4352b97d2aae6437983529b3df669bfcbb61 (diff)
lttng-instrumentation-net-move-network-tracepoints
LTTng instrumentation net: move network tracepoints
One of the objectives when tracing the network subsystem is to trace packet
emission as late as possible and packet arrival as soon as possible, in order
to obtain more accurate timing measurements.
This patch moves the xmit tracepoint into dev_hard_start_xmit(), the last
function in the transmit path that is not specific to a particular network
interface yet is called for every frame. The tracepoint is also duplicated
so that each segmented packet is traced when GSO is in effect.
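For reference, a tracepoint with the single-argument prototype used here could be declared roughly as in the sketch below. This is a minimal, hypothetical sketch rather than the declaration shipped with this patch: the recorded fields (skbaddr, len, name) are illustrative assumptions, and the usual trace-header boilerplate (TRACE_SYSTEM definition, include guards, trace/define_trace.h) is omitted for brevity.

/*
 * Hypothetical sketch of a single-skb tracepoint declaration; not taken
 * from this patch. The surrounding trace-header boilerplate is omitted.
 */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/tracepoint.h>

TRACE_EVENT(net_dev_xmit,

	TP_PROTO(struct sk_buff *skb),

	TP_ARGS(skb),

	TP_STRUCT__entry(
		__field(void *,		skbaddr)
		__field(unsigned int,	len)
		__string(name,		skb->dev->name)
	),

	TP_fast_assign(
		/* Record the skb address, its length and the device name. */
		__entry->skbaddr = skb;
		__entry->len = skb->len;
		__assign_str(name, skb->dev->name);
	),

	TP_printk("dev=%s skbaddr=%p len=%u",
		  __get_str(name), __entry->skbaddr, __entry->len)
);

Recording only the skb address, length and device name keeps the cost on the transmit fast path low.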
The receive tracepoint is moved earlier in the receive path, from
__netif_receive_skb() into netif_receive_skb(). The tracepoint is also
duplicated in netif_rx(), which enables more precise measurements for
non-NAPI drivers that hand packets to the stack through that function.
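For context on the two receive entry points, the fragments below sketch which path, and therefore which copy of the receive tracepoint, each driver model reaches. The driver functions are hypothetical and not part of this patch.

/*
 * Hypothetical driver fragments, for illustration only: they show which
 * receive entry point each driver model uses.
 */
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Non-NAPI driver: frames enter the stack through netif_rx() from the
 * interrupt handler, so the duplicated tracepoint fires as soon as the
 * frame leaves the driver. */
static irqreturn_t example_legacy_isr(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;
	struct sk_buff *skb = netdev_alloc_skb(dev, 1514);

	if (!skb)
		return IRQ_HANDLED;
	/* ... copy the received frame into skb and set skb->protocol ... */
	netif_rx(skb);		/* receive tracepoint hit early in netif_rx() */
	return IRQ_HANDLED;
}

/* NAPI driver: frames are delivered from the poll routine through
 * netif_receive_skb(), where the tracepoint now sits before the
 * CONFIG_RPS and protocol demultiplexing work. */
static int example_napi_poll(struct napi_struct *napi, int budget)
{
	struct sk_buff *skb = netdev_alloc_skb(napi->dev, 1514);

	if (skb) {
		/* ... fill skb from the RX ring and set skb->protocol ... */
		netif_receive_skb(skb);
	}
	return 0;
}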
Signed-off-by: Benjamin Poirier <benjamin.poirier@polymtl.ca>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Diffstat (limited to 'net')
-rw-r--r-- | net/core/dev.c | 8
1 files changed, 6 insertions, 2 deletions
diff --git a/net/core/dev.c b/net/core/dev.c
index bd9e534dea6..97010d1a72a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2127,6 +2127,7 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
 			}
 		}
 
+		trace_net_dev_xmit(skb);
 		rc = ops->ndo_start_xmit(skb, dev);
 		trace_net_dev_xmit(skb, rc);
 		if (rc == NETDEV_TX_OK)
@@ -2148,6 +2149,7 @@ gso:
 		if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
 			skb_dst_drop(nskb);
 
+		trace_net_dev_xmit(nskb);
 		rc = ops->ndo_start_xmit(nskb, dev);
 		trace_net_dev_xmit(nskb, rc);
 		if (unlikely(rc != NETDEV_TX_OK)) {
@@ -2383,7 +2385,6 @@ int dev_queue_xmit(struct sk_buff *skb)
 	struct Qdisc *q;
 	int rc = -ENOMEM;
 
-	trace_net_dev_xmit(skb);
 	/* Disable soft irqs for various locks below. Also
 	 * stops preemption for RCU.
 	 */
@@ -2750,6 +2751,8 @@ int netif_rx(struct sk_buff *skb)
 	if (netpoll_rx(skb))
 		return NET_RX_DROP;
 
+	trace_net_dev_receive(skb);
+
 	if (netdev_tstamp_prequeue)
 		net_timestamp_check(skb);
 
@@ -3065,7 +3068,6 @@ static int __netif_receive_skb(struct sk_buff *skb)
 	}
 
 	__this_cpu_inc(softnet_data.processed);
-	trace_net_dev_receive(skb);
 	skb_reset_network_header(skb);
 	skb_reset_transport_header(skb);
 	skb->mac_len = skb->network_header - skb->mac_header;
@@ -3184,6 +3186,8 @@ int netif_receive_skb(struct sk_buff *skb)
 	if (skb_defer_rx_timestamp(skb))
 		return NET_RX_SUCCESS;
 
+	trace_net_dev_receive(skb);
+
 #ifdef CONFIG_RPS
 	{
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
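As a usage sketch (not part of this patch), a small probe module could hook the relocated tracepoints roughly as follows. It assumes the single-argument prototypes used above, assumes the declarations are reachable through trace/events/net.h, and relies on the register_/unregister_ helpers that the tracepoint machinery generates for each declared tracepoint.

/*
 * Hypothetical probe module: hooks trace_net_dev_xmit(skb) and
 * trace_net_dev_receive(skb), assuming the single-skb prototypes and the
 * trace/events/net.h header location (both assumptions, see above).
 */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <trace/events/net.h>

static void probe_net_dev_xmit(void *ignore, struct sk_buff *skb)
{
	pr_debug("xmit: dev=%s len=%u\n", skb->dev->name, skb->len);
}

static void probe_net_dev_receive(void *ignore, struct sk_buff *skb)
{
	pr_debug("recv: dev=%s len=%u\n", skb->dev->name, skb->len);
}

static int __init net_trace_probe_init(void)
{
	int ret;

	ret = register_trace_net_dev_xmit(probe_net_dev_xmit, NULL);
	if (ret)
		return ret;

	ret = register_trace_net_dev_receive(probe_net_dev_receive, NULL);
	if (ret)
		unregister_trace_net_dev_xmit(probe_net_dev_xmit, NULL);
	return ret;
}

static void __exit net_trace_probe_exit(void)
{
	unregister_trace_net_dev_receive(probe_net_dev_receive, NULL);
	unregister_trace_net_dev_xmit(probe_net_dev_xmit, NULL);
	/* Wait for in-flight probes to finish before the module goes away. */
	tracepoint_synchronize_unregister();
}

module_init(net_trace_probe_init);
module_exit(net_trace_probe_exit);
MODULE_LICENSE("GPL");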