author     Linus Torvalds <torvalds@linux-foundation.org>   2022-05-25 12:22:58 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2022-05-25 12:22:58 -0700
commit     7e062cda7d90543ac8c7700fc7c5527d0c0f22ad (patch)
tree       2f1602595d9416be41cc2e88a659ba4c145734b9 /drivers/net/ipa
parent     5d1772b1739b085721431eef0c0400f3aff01abf (diff)
parent     57d7becda9c9e612e6b00676f2eecfac3e719e88 (diff)
Merge tag 'net-next-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core
----
- Support TCPv6 segmentation offload with super-segments larger than
64k bytes using the IPv6 Jumbogram extension header (AKA BIG TCP).
- Generalize skb freeing deferral to per-cpu lists, instead of
per-socket lists.
- Add a netdev statistic for packets dropped due to L2 address
mismatch (rx_otherhost_dropped).
- Continue work annotating skb drop reasons.
- Accept alternative netdev names (ALT_IFNAME) in more netlink
requests.
- Add VLAN support for AF_PACKET SOCK_RAW GSO.
   - Allow receiving skb mark from the socket as a cmsg (see the sketch
     after this list).
- Enable memcg accounting for veth queues, sysctl tables and IPv6.
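
   The skb-mark cmsg item above is easiest to see from userspace. Below is
   a minimal sketch, assuming the option landed as SO_RCVMARK with the mark
   delivered as a SOL_SOCKET/SO_MARK control message; the fallback define is
   an assumption, so check your uapi headers before relying on it.

       #include <stdint.h>
       #include <stdio.h>
       #include <string.h>
       #include <sys/socket.h>
       #include <netinet/in.h>

       #ifndef SO_RCVMARK
       #define SO_RCVMARK 75   /* assumed value; verify in <asm-generic/socket.h> */
       #endif

       int main(void)
       {
               struct sockaddr_in sin = { .sin_family = AF_INET,
                                          .sin_port = htons(5000) };
               char data[2048], cbuf[CMSG_SPACE(sizeof(uint32_t))];
               struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
               struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                     .msg_control = cbuf,
                                     .msg_controllen = sizeof(cbuf) };
               int fd = socket(AF_INET, SOCK_DGRAM, 0), one = 1;

               bind(fd, (struct sockaddr *)&sin, sizeof(sin));
               /* Ask the kernel to attach the skb mark to received packets */
               setsockopt(fd, SOL_SOCKET, SO_RCVMARK, &one, sizeof(one));

               if (recvmsg(fd, &msg, 0) < 0)
                       return 1;

               for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
                    cm = CMSG_NXTHDR(&msg, cm)) {
                       if (cm->cmsg_level == SOL_SOCKET &&
                           cm->cmsg_type == SO_MARK) {
                               uint32_t mark;

                               memcpy(&mark, CMSG_DATA(cm), sizeof(mark));
                               printf("skb mark: %u\n", mark);
                       }
               }
               return 0;
       }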
BPF
---
   - Add libbpf support for User Statically-Defined Tracing (USDTs);
     see the attach sketch after this list.
- Speed up symbol resolution for kprobes multi-link attachments.
- Support storing typed pointers to referenced and unreferenced
objects in BPF maps.
- Add support for BPF link iterator.
- Introduce access to remote CPU map elements in BPF per-cpu map.
- Allow middle-of-the-road settings for the
kernel.unprivileged_bpf_disabled sysctl.
   - Implement basic types of dynamic pointers e.g. to allow for
     dynamically sized ringbuf reservations without extra memory copies
     (see the sketch below).
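
   The dynptr item, sketched concretely: reserve a dynamically sized
   ring-buffer record, write through the dynptr, submit. Helper names and
   signatures are those this series introduces (bpf_ringbuf_reserve_dynptr()
   and friends); treat this as illustrative, not a tested program.

       // SPDX-License-Identifier: GPL-2.0
       #include <linux/bpf.h>
       #include <bpf/bpf_helpers.h>

       struct {
               __uint(type, BPF_MAP_TYPE_RINGBUF);
               __uint(max_entries, 256 * 1024);
       } rb SEC(".maps");

       SEC("tracepoint/syscalls/sys_enter_write")
       int log_write(void *ctx)
       {
               __u32 len = 64;         /* could be computed at run time */
               struct bpf_dynptr ptr;
               __u8 first = 0xa5;

               /* Reserve a variable-sized record; the dynptr must be
                * released (submit or discard) on every program path.
                */
               if (bpf_ringbuf_reserve_dynptr(&rb, len, 0, &ptr)) {
                       bpf_ringbuf_discard_dynptr(&ptr, 0);
                       return 0;
               }

               /* Bounds-checked write through the dynptr -- no extra copy */
               bpf_dynptr_write(&ptr, 0, &first, sizeof(first));

               bpf_ringbuf_submit_dynptr(&ptr, 0);
               return 0;
       }

       char LICENSE[] SEC("license") = "GPL";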
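   For the USDT item above, userspace attachment with the new libbpf 0.8
   API looks roughly like this; the library path, provider, probe, and
   program names are hypothetical.

       #include <sys/types.h>
       #include <bpf/libbpf.h>

       /* Attach a BPF program (already loaded as part of "obj") to a
        * USDT probe -- here a made-up "mylib:operation_start" probe in
        * a shared library, for one specific pid.
        */
       int attach_usdt_probe(struct bpf_object *obj, pid_t pid)
       {
               struct bpf_program *prog;
               struct bpf_link *link;

               prog = bpf_object__find_program_by_name(obj, "handle_op_start");
               if (!prog)
                       return -1;

               link = bpf_program__attach_usdt(prog, pid,
                                               "/usr/lib/libmylib.so",
                                               "mylib", "operation_start",
                                               /* opts */ NULL);
               return link ? 0 : -1;
       }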
Protocols
---------
   - Retire the port-only listening_hash table; add a second bind table
     hashed by port and address, avoiding a linear list walk when binding
     to very popular ports (e.g. 443).
- Add bridge FDB bulk flush filtering support allowing user space to
remove all FDB entries matching a condition.
- Introduce accept_unsolicited_na sysctl for IPv6 to implement
router-side changes for RFC9131.
- Support for MPTCP path manager in user space.
- Add MPTCP support for fallback to regular TCP for connections that
have never connected additional subflows or transmitted
out-of-sequence data (partial support for RFC8684 fallback).
- Avoid races in MPTCP-level window tracking, stabilize and improve
throughput.
- Support lockless operation of GRE tunnels with seq numbers enabled.
   - WiFi support for host-based BSS color collision detection.
   - Add support for SO_TXTIME/SCM_TXTIME on CAN sockets (see the pacing
     sketch after this list).
   - Support transmission without flow control in CAN ISOTP (ISO 15765-2).
   - Support zero-copy Tx with TLS 1.2 crypto offload (sendfile); a kTLS
     sketch follows this list.
- Allow matching on the number of VLAN tags via tc-flower.
- Add tracepoint for tcp_set_ca_state().
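
   For the CAN SO_TXTIME item, a minimal pacing sketch on a CAN_RAW
   socket. SO_TXTIME and struct sock_txtime are pre-existing UAPI; the
   fallback defines and clock choice are assumptions, and an etf-style
   qdisc is typically required on the egress interface.

       #include <string.h>
       #include <time.h>
       #include <sys/socket.h>
       #include <linux/can.h>
       #include <linux/net_tstamp.h>

       #ifndef SO_TXTIME
       #define SO_TXTIME 61
       #define SCM_TXTIME SO_TXTIME
       #endif

       /* Queue one CAN frame for transmission at an absolute CLOCK_TAI
        * time in nanoseconds, via the SO_TXTIME machinery that CAN
        * sockets now honor.
        */
       static int can_send_at(int fd, const struct can_frame *cf, __u64 txtime_ns)
       {
               char cbuf[CMSG_SPACE(sizeof(__u64))];
               struct iovec iov = { .iov_base = (void *)cf,
                                    .iov_len = sizeof(*cf) };
               struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                     .msg_control = cbuf,
                                     .msg_controllen = sizeof(cbuf) };
               struct sock_txtime cfg = { .clockid = CLOCK_TAI };
               struct cmsghdr *cm;

               /* Once per socket: declare which clock the txtimes use */
               if (setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg)) < 0)
                       return -1;

               /* Per packet: carry the transmit time as ancillary data */
               cm = CMSG_FIRSTHDR(&msg);
               cm->cmsg_level = SOL_SOCKET;
               cm->cmsg_type = SCM_TXTIME;
               cm->cmsg_len = CMSG_LEN(sizeof(txtime_ns));
               memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));

               return sendmsg(fd, &msg, 0) < 0 ? -1 : 0;
       }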
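   And for the zero-copy TLS Tx item, a sketch assuming the new option is
   exposed as TLS_TX_ZEROCOPY_RO; key material must come from a real TLS
   handshake and is zeroed here, and the SOL_TLS fallback define is an
   assumption.

       #include <linux/tls.h>
       #include <netinet/in.h>
       #include <netinet/tcp.h>
       #include <sys/sendfile.h>
       #include <sys/socket.h>

       #ifndef SOL_TLS
       #define SOL_TLS 282
       #endif

       /* Enable kTLS transmit offload on a connected TCP socket, opt in
        * to the zero-copy read-only mode, then sendfile() through it.
        * Requires NIC crypto offload and unchanging file data.
        */
       static int tls_sendfile_zerocopy(int sock, int file_fd, size_t len)
       {
               struct tls12_crypto_info_aes_gcm_128 ci = {
                       .info.version = TLS_1_2_VERSION,
                       .info.cipher_type = TLS_CIPHER_AES_GCM_128,
                       /* key/iv/salt/rec_seq: fill from the handshake */
               };
               int one = 1;

               if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0 ||
                   setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci)) < 0 ||
                   setsockopt(sock, SOL_TLS, TLS_TX_ZEROCOPY_RO,
                              &one, sizeof(one)) < 0)
                       return -1;

               return sendfile(sock, file_fd, NULL, len) < 0 ? -1 : 0;
       }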
Driver API
----------
- Improve error reporting from classifier and action offload.
- Add support for listing line cards in switches (devlink).
- Add helpers for reporting page pool statistics with ethtool -S.
- Add support for reading clock cycles when using PTP virtual clocks,
instead of having the driver convert to time before reporting. This
makes it possible to report time from different vclocks.
- Support configuring low-latency Tx descriptor push via ethtool.
- Separate Clause 22 and Clause 45 MDIO accesses more explicitly.
New hardware / drivers
----------------------
- Ethernet:
- Marvell's Octeon NIC PCI Endpoint support (octeon_ep)
- Sunplus SP7021 SoC (sp7021_emac)
- Add support for Renesas RZ/V2M (in ravb)
- Add support for MediaTek mt7986 switches (in mtk_eth_soc)
- Ethernet PHYs:
- ADIN1100 industrial PHYs (w/ 10BASE-T1L and SQI reporting)
- TI DP83TD510 PHY
- Microchip LAN8742/LAN88xx PHYs
- WiFi:
- Driver for pureLiFi X, XL, XC devices (plfxlc)
- Driver for Silicon Labs devices (wfx)
- Support for WCN6750 (in ath11k)
- Support Realtek 8852ce devices (in rtw89)
- Mobile:
- MediaTek T700 modems (Intel 5G 5000 M.2 cards)
- CAN:
- ctucanfd: add support for CTU CAN FD open-source IP core from
Czech Technical University in Prague
Drivers
-------
- Delete a number of old drivers still using virt_to_bus().
- Ethernet NICs:
     - intel: support TSO on MPLS tunnels
- broadcom: support multi-buffer XDP
- nfp: support VF rate limiting
- sfc: use hardware tx timestamps for more than PTP
- mlx5: multi-port eswitch support
- hyper-v: add support for XDP_REDIRECT
- atlantic: XDP support (including multi-buffer)
- macb: improve real-time perf by deferring Tx processing to NAPI
- High-speed Ethernet switches:
- mlxsw: implement basic line card information querying
- prestera: add support for traffic policing on ingress and egress
- Embedded Ethernet switches:
- lan966x: add support for packet DMA (FDMA)
- lan966x: add support for PTP programmable pins
- ti: cpsw_new: enable bc/mc storm prevention
- Qualcomm 802.11ax WiFi (ath11k):
- Wake-on-WLAN support for QCA6390 and WCN6855
- device recovery (firmware restart) support
- support setting Specific Absorption Rate (SAR) for WCN6855
- read country code from SMBIOS for WCN6855/QCA6390
- enable keep-alive during WoWLAN suspend
- implement remain-on-channel support
- MediaTek WiFi (mt76):
- support Wireless Ethernet Dispatch offloading packet movement
between the Ethernet switch and WiFi interfaces
- non-standard VHT MCS10-11 support
- mt7921 AP mode support
- mt7921 IPv6 NS offload support
- Ethernet PHYs:
- micrel: ksz9031/ksz9131: cabletest support
- lan87xx: SQI support for T1 PHYs
- lan937x: add interrupt support for link detection"
* tag 'net-next-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1809 commits)
ptp: ocp: Add firmware header checks
ptp: ocp: fix PPS source selector debugfs reporting
ptp: ocp: add .init function for sma_op vector
ptp: ocp: vectorize the sma accessor functions
ptp: ocp: constify selectors
ptp: ocp: parameterize input/output sma selectors
ptp: ocp: revise firmware display
ptp: ocp: add Celestica timecard PCI ids
ptp: ocp: Remove #ifdefs around PCI IDs
ptp: ocp: 32-bit fixups for pci start address
Revert "net/smc: fix listen processing for SMC-Rv2"
ath6kl: Use cc-disable-warning to disable -Wdangling-pointer
selftests/bpf: Dynptr tests
bpf: Add dynptr data slices
bpf: Add bpf_dynptr_read and bpf_dynptr_write
bpf: Dynptr support for ring buffers
bpf: Add bpf_dynptr_from_mem for local dynptrs
bpf: Add verifier support for dynptrs
bpf: Suppress 'passing zero to PTR_ERR' warning
bpf: Introduce bpf_arch_text_invalidate for bpf_prog_pack
...
Diffstat (limited to 'drivers/net/ipa')
 drivers/net/ipa/gsi.c             |  20
 drivers/net/ipa/gsi.h             |   1
 drivers/net/ipa/gsi_reg.h         |   2
 drivers/net/ipa/gsi_trans.c       |  38
 drivers/net/ipa/gsi_trans.h       |  24
 drivers/net/ipa/ipa.h             |   2
 drivers/net/ipa/ipa_cmd.c         |  78
 drivers/net/ipa/ipa_cmd.h         |  11
 drivers/net/ipa/ipa_data-v3.1.c   |   2
 drivers/net/ipa/ipa_data-v3.5.1.c |   2
 drivers/net/ipa/ipa_data-v4.11.c  |   2
 drivers/net/ipa/ipa_data-v4.2.c   |   2
 drivers/net/ipa/ipa_data-v4.5.c   |   2
 drivers/net/ipa/ipa_data-v4.9.c   |   2
 drivers/net/ipa/ipa_data.h        |  70
 drivers/net/ipa/ipa_endpoint.c    | 214
 drivers/net/ipa/ipa_endpoint.h    |  85
 drivers/net/ipa/ipa_interrupt.c   |   6
 drivers/net/ipa/ipa_modem.c       |  13
 19 files changed, 289 insertions(+), 287 deletions(-)
diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index a701178a1d13..9cfe84319ee4 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -1179,15 +1179,15 @@ static void gsi_isr_gp_int1(struct gsi *gsi)
 	 * Similarly, we could get an error back when updating flow control
 	 * on a channel because it's not in the proper state.
 	 *
-	 * In either case, we silently ignore a CHANNEL_NOT_RUNNING error
-	 * if we receive it.
+	 * In either case, we silently ignore a INCORRECT_CHANNEL_STATE
+	 * error if we receive it.
 	 */
 	val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);
 	result = u32_get_bits(val, GENERIC_EE_RESULT_FMASK);

 	switch (result) {
 	case GENERIC_EE_SUCCESS:
-	case GENERIC_EE_CHANNEL_NOT_RUNNING:
+	case GENERIC_EE_INCORRECT_CHANNEL_STATE:
 		gsi->result = 0;
 		break;

@@ -1492,12 +1492,8 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
 	if (index == ring->index % ring->count)
 		return NULL;

-	/* Get the transaction for the latest completed event.  Take a
-	 * reference to keep it from completing before we give the events
-	 * for this and previous transactions back to the hardware.
-	 */
+	/* Get the transaction for the latest completed event. */
 	trans = gsi_event_trans(channel, gsi_ring_virt(ring, index - 1));
-	refcount_inc(&trans->refcount);

 	/* For RX channels, update each completed transaction with the number
 	 * of bytes that were actually received.  For TX channels, report
@@ -1512,9 +1508,7 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
 	gsi_trans_move_complete(trans);

 	/* Tell the hardware we've handled these events */
-	gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id, index);
-
-	gsi_trans_free(trans);
+	gsi_evt_ring_doorbell(gsi, evt_ring_id, index);

 	return gsi_channel_trans_complete(channel);
 }
@@ -1616,8 +1610,8 @@ static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id)
 	gsi_channel_program(channel, true);

 	if (channel->toward_ipa)
-		netif_tx_napi_add(&gsi->dummy_dev, &channel->napi,
-				  gsi_channel_poll, NAPI_POLL_WEIGHT);
+		netif_napi_add_tx(&gsi->dummy_dev, &channel->napi,
+				  gsi_channel_poll);
 	else
 		netif_napi_add(&gsi->dummy_dev, &channel->napi,
 			       gsi_channel_poll, NAPI_POLL_WEIGHT);
diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h
index 9cc657658811..5d66116b46b0 100644
--- a/drivers/net/ipa/gsi.h
+++ b/drivers/net/ipa/gsi.h
@@ -84,7 +84,6 @@ struct gsi_trans_info {
 	struct gsi_trans_pool pool;	/* transaction pool */
 	struct gsi_trans_pool sg_pool;	/* scatterlist pool */
 	struct gsi_trans_pool cmd_pool;	/* command payload DMA pool */
-	struct gsi_trans_pool info_pool;/* command information pool */
 	struct gsi_trans **map;		/* TRE -> transaction map */

 	spinlock_t spinlock;		/* protects updates to the lists */
diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
index 8906f4381032..5bd8b31656d3 100644
--- a/drivers/net/ipa/gsi_reg.h
+++ b/drivers/net/ipa/gsi_reg.h
@@ -515,7 +515,7 @@ enum gsi_err_type {
 /** enum gsi_generic_ee_result - GENERIC_EE_RESULT field values in SCRATCH_0 */
 enum gsi_generic_ee_result {
 	GENERIC_EE_SUCCESS			= 0x1,
-	GENERIC_EE_CHANNEL_NOT_RUNNING		= 0x2,
+	GENERIC_EE_INCORRECT_CHANNEL_STATE	= 0x2,
 	GENERIC_EE_INCORRECT_DIRECTION		= 0x3,
 	GENERIC_EE_INCORRECT_CHANNEL_TYPE	= 0x4,
 	GENERIC_EE_INCORRECT_CHANNEL		= 0x5,
diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
index 87e1d43c118c..55f8fe7d2668 100644
--- a/drivers/net/ipa/gsi_trans.c
+++ b/drivers/net/ipa/gsi_trans.c
@@ -410,10 +410,8 @@ void gsi_trans_free(struct gsi_trans *trans)

 /* Add an immediate command to a transaction */
 void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
-		       dma_addr_t addr, enum dma_data_direction direction,
-		       enum ipa_cmd_opcode opcode)
+		       dma_addr_t addr, enum ipa_cmd_opcode opcode)
 {
-	struct ipa_cmd_info *info;
 	u32 which = trans->used++;
 	struct scatterlist *sg;

@@ -438,9 +436,7 @@ void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
 	sg_dma_address(sg) = addr;
 	sg_dma_len(sg) = size;

-	info = &trans->info[which];
-	info->opcode = opcode;
-	info->direction = direction;
+	trans->cmd_opcode[which] = opcode;
 }

 /* Add a page transfer to a transaction.  It will fill the only TRE. */
@@ -556,10 +552,10 @@ static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
 	struct gsi_ring *ring = &channel->tre_ring;
 	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
 	bool bei = channel->toward_ipa;
-	struct ipa_cmd_info *info;
 	struct gsi_tre *dest_tre;
 	struct scatterlist *sg;
 	u32 byte_count = 0;
+	u8 *cmd_opcode;
 	u32 avail;
 	u32 i;

@@ -570,7 +566,7 @@ static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
 	 * If there is no info array we're doing a simple data
 	 * transfer request, whose opcode is IPA_CMD_NONE.
 	 */
-	info = trans->info ? &trans->info[0] : NULL;
+	cmd_opcode = channel->command ? &trans->cmd_opcode[0] : NULL;
 	avail = ring->count - ring->index % ring->count;
 	dest_tre = gsi_ring_virt(ring, ring->index);
 	for_each_sg(trans->sgl, sg, trans->used, i) {
@@ -581,8 +577,8 @@ static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
 		byte_count += len;
 		if (!avail--)
 			dest_tre = gsi_ring_virt(ring, 0);
-		if (info)
-			opcode = info++->opcode;
+		if (cmd_opcode)
+			opcode = *cmd_opcode++;

 		gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode);
 		dest_tre++;
@@ -637,28 +633,6 @@ out_trans_free:
 	gsi_trans_free(trans);
 }

-/* Commit a GSI transaction and wait for it to complete, with timeout */
-int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
-				  unsigned long timeout)
-{
-	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
-	unsigned long remaining = 1;	/* In case of empty transaction */
-
-	if (!trans->used)
-		goto out_trans_free;
-
-	refcount_inc(&trans->refcount);
-
-	__gsi_trans_commit(trans, true);
-
-	remaining = wait_for_completion_timeout(&trans->completion,
-						timeout_jiffies);
-out_trans_free:
-	gsi_trans_free(trans);
-
-	return remaining ? 0 : -ETIMEDOUT;
-}
-
 /* Process the completion of a transaction; called while polling */
 void gsi_trans_complete(struct gsi_trans *trans)
 {
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h
index af379b49299e..020c3b32de1d 100644
--- a/drivers/net/ipa/gsi_trans.h
+++ b/drivers/net/ipa/gsi_trans.h
@@ -22,6 +22,9 @@ struct gsi;
 struct gsi_trans;
 struct gsi_trans_pool;

+/* Maximum number of TREs in an IPA immediate command transaction */
+#define IPA_COMMAND_TRANS_TRE_MAX	8
+
 /**
  * struct gsi_trans - a GSI transaction
  *
@@ -34,8 +37,8 @@ struct gsi_trans_pool;
  * @used:	Number of TREs *used* (could be less than tre_count)
  * @len:	Total # of transfer bytes represented in sgl[] (set by core)
  * @data:	Preserved but not touched by the core transaction code
+ * @cmd_opcode:	Array of command opcodes (command channel only)
  * @sgl:	An array of scatter/gather entries managed by core code
- * @info:	Array of command information structures (command channel)
  * @direction:	DMA transfer direction (DMA_NONE for commands)
  * @refcount:	Reference count used for destruction
  * @completion:	Completed when the transaction completes
@@ -57,9 +60,11 @@ struct gsi_trans {
 	u8 used;			/* # entries used in sgl[] */
 	u32 len;			/* total # bytes across sgl[] */

-	void *data;
+	union {
+		void *data;
+		u8 cmd_opcode[IPA_COMMAND_TRANS_TRE_MAX];
+	};
 	struct scatterlist *sgl;
-	struct ipa_cmd_info *info;	/* array of entries, or null */
 	enum dma_data_direction direction;

 	refcount_t refcount;
@@ -165,12 +170,10 @@ void gsi_trans_free(struct gsi_trans *trans);
  * @buf:	Buffer pointer for command payload
  * @size:	Number of bytes in buffer
  * @addr:	DMA address for payload
- * @direction:	Direction of DMA transfer (or DMA_NONE if none required)
  * @opcode:	IPA immediate command opcode
  */
 void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
-		       dma_addr_t addr, enum dma_data_direction direction,
-		       enum ipa_cmd_opcode opcode);
+		       dma_addr_t addr, enum ipa_cmd_opcode opcode);

 /**
  * gsi_trans_page_add() - Add a page transfer to a transaction
@@ -206,15 +209,6 @@ void gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
 void gsi_trans_commit_wait(struct gsi_trans *trans);

 /**
- * gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
- *				     it to complete, with timeout
- * @trans:	Transaction to commit
- * @timeout:	Timeout period (in milliseconds)
- */
-int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
-				  unsigned long timeout);
-
-/**
  * gsi_trans_read_byte() - Issue a single byte read TRE on a channel
  * @gsi:	GSI pointer
  * @channel_id:	Channel on which to read a byte
diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 9fc880eb7e3a..4fc3c72359f5 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -62,6 +62,7 @@ struct ipa_interrupt;
 * @initialized:	Bit mask indicating endpoints initialized
 * @set_up:		Bit mask indicating endpoints set up
 * @enabled:		Bit mask indicating endpoints enabled
+ * @modem_tx_count:	Number of defined modem TX endoints
 * @endpoint:		Array of endpoint information
 * @channel_map:	Mapping of GSI channel to IPA endpoint
 * @name_map:		Mapping of IPA endpoint name to IPA endpoint
@@ -114,6 +115,7 @@ struct ipa {
 	u32 set_up;
 	u32 enabled;

+	u32 modem_tx_count;
 	struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
 	struct ipa_endpoint *channel_map[GSI_CHANNEL_COUNT_MAX];
 	struct ipa_endpoint *name_map[IPA_ENDPOINT_COUNT];
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index d57472ea077f..e58cd4478fd3 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -26,14 +26,13 @@
 * other than data transfer to another endpoint.
 *
 * Immediate commands are represented by GSI transactions just like other
- * transfer requests, represented by a single GSI TRE.  Each immediate
- * command has a well-defined format, having a payload of a known length.
- * This allows the transfer element's length field to be used to hold an
- * immediate command's opcode.  The payload for a command resides in DRAM
- * and is described by a single scatterlist entry in its transaction.
- * Commands do not require a transaction completion callback.  To commit
- * an immediate command transaction, either gsi_trans_commit_wait() or
- * gsi_trans_commit_wait_timeout() is used.
+ * transfer requests, and use a single GSI TRE.  Each immediate command
+ * has a well-defined format, having a payload of a known length.  This
+ * allows the transfer element's length field to be used to hold an
+ * immediate command's opcode.  The payload for a command resides in AP
+ * memory and is described by a single scatterlist entry in its transaction.
+ * Commands do not require a transaction completion callback, and are
+ * (currently) always issued using gsi_trans_commit_wait().
 */

/* Some commands can wait until indicated pipeline stages are clear */
@@ -350,7 +349,6 @@ int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
 {
 	struct gsi_trans_info *trans_info = &channel->trans_info;
 	struct device *dev = channel->gsi->dev;
-	int ret;

 	/* This is as good a place as any to validate build constants */
 	ipa_cmd_validate_build();
@@ -359,20 +357,9 @@ int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
 	 * a single transaction can require up to tlv_count of them,
 	 * so we treat them as if that many can be allocated at once.
 	 */
-	ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
-				      sizeof(union ipa_cmd_payload),
-				      tre_max, channel->tlv_count);
-	if (ret)
-		return ret;
-
-	/* Each TRE needs a command info structure */
-	ret = gsi_trans_pool_init(&trans_info->info_pool,
-				  sizeof(struct ipa_cmd_info),
-				  tre_max, channel->tlv_count);
-	if (ret)
-		gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
-
-	return ret;
+	return gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
+				       sizeof(union ipa_cmd_payload),
+				       tre_max, channel->tlv_count);
 }

 void ipa_cmd_pool_exit(struct gsi_channel *channel)
@@ -380,7 +367,6 @@ void ipa_cmd_pool_exit(struct gsi_channel *channel)
 	struct gsi_trans_info *trans_info = &channel->trans_info;
 	struct device *dev = channel->gsi->dev;

-	gsi_trans_pool_exit(&trans_info->info_pool);
 	gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
 }

@@ -403,7 +389,6 @@ void ipa_cmd_table_init_add(struct gsi_trans *trans,
 			    dma_addr_t hash_addr)
 {
 	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
-	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_hw_ip_fltrt_init *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
@@ -434,7 +419,7 @@ void ipa_cmd_table_init_add(struct gsi_trans *trans,
 	payload->nhash_rules_addr = cpu_to_le64(addr);

 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  direction, opcode);
+			  opcode);
 }

 /* Initialize header space in IPA-local memory */
@@ -443,7 +428,6 @@ void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
 {
 	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
 	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
-	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_hw_hdr_init_local *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
@@ -465,7 +449,7 @@ void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
 	payload->flags = cpu_to_le32(flags);

 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  direction, opcode);
+			  opcode);
 }

 void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
@@ -522,7 +506,7 @@ void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
 	payload->clear_options = cpu_to_le32(options);

 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  DMA_NONE, opcode);
+			  opcode);
 }

 /* Skip IP packet processing on the next data transfer on a TX channel */
@@ -530,7 +514,6 @@ static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
 {
 	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
 	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT;
-	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_ip_packet_init *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
@@ -542,7 +525,7 @@ static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
 					IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);

 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  direction, opcode);
+			  opcode);
 }

 /* Use a DMA command to read or write a block of IPA-resident memory */
@@ -553,7 +536,6 @@ void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
 	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
 	struct ipa_cmd_hw_dma_mem_mem *payload;
 	union ipa_cmd_payload *cmd_payload;
-	enum dma_data_direction direction;
 	dma_addr_t payload_addr;
 	u16 flags;

@@ -584,17 +566,14 @@ void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
 	payload->flags = cpu_to_le16(flags);
 	payload->system_addr = cpu_to_le64(addr);

-	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
-
 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  direction, opcode);
+			  opcode);
 }

 static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans)
 {
 	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
 	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS;
-	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_ip_packet_tag_status *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
@@ -605,14 +584,13 @@ static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans)
 	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);

 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  direction, opcode);
+			  opcode);
 }

 /* Issue a small command TX data transfer */
 static void ipa_cmd_transfer_add(struct gsi_trans *trans)
 {
 	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
-	enum dma_data_direction direction = DMA_TO_DEVICE;
 	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
 	union ipa_cmd_payload *payload;
 	dma_addr_t payload_addr;
@@ -621,7 +599,7 @@ static void ipa_cmd_transfer_add(struct gsi_trans *trans)
 	payload = ipa_cmd_payload_alloc(ipa, &payload_addr);

 	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
-			  direction, opcode);
+			  opcode);
 }

 /* Add immediate commands to a transaction to clear the hardware pipeline */
@@ -661,28 +639,16 @@ void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
 	wait_for_completion(&ipa->completion);
 }

-static struct ipa_cmd_info *
-ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
-{
-	struct gsi_channel *channel;
-
-	channel = &endpoint->ipa->gsi.channel[endpoint->channel_id];
-
-	return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
-}
-
 /* Allocate a transaction for the command TX endpoint */
 struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
 {
 	struct ipa_endpoint *endpoint;
-	struct gsi_trans *trans;

-	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+	if (WARN_ON(tre_count > IPA_COMMAND_TRANS_TRE_MAX))
+		return NULL;

-	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
-					tre_count, DMA_NONE);
-	if (trans)
-		trans->info = ipa_cmd_info_alloc(endpoint, tre_count);
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];

-	return trans;
+	return gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
+				       tre_count, DMA_NONE);
 }
diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
index 05ed7e42e184..9215ddad1010 100644
--- a/drivers/net/ipa/ipa_cmd.h
+++ b/drivers/net/ipa/ipa_cmd.h
@@ -47,17 +47,6 @@ enum ipa_cmd_opcode {
 };

 /**
- * struct ipa_cmd_info - information needed for an IPA immediate command
- *
- * @opcode:	The command opcode.
- * @direction:	Direction of data transfer for DMA commands
- */
-struct ipa_cmd_info {
-	enum ipa_cmd_opcode opcode;
-	enum dma_data_direction direction;
-};
-
-/**
 * ipa_cmd_table_valid() - Validate a memory region holding a table
 * @ipa:	- IPA pointer
 * @mem:	- IPA memory region descriptor
diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
index 8ff351aefd23..00f4e506e6e5 100644
--- a/drivers/net/ipa/ipa_data-v3.1.c
+++ b/drivers/net/ipa/ipa_data-v3.1.c
@@ -103,6 +103,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.rx = {
 				.buffer_size	= 8192,
 				.pad_align	= ilog2(sizeof(u32)),
+				.aggr_time_limit = 500,
 			},
 		},
 	},
@@ -150,6 +151,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.aggregation	= true,
 			.rx = {
 				.buffer_size	= 8192,
+				.aggr_time_limit = 500,
 				.aggr_close_eof	= true,
 			},
 		},
diff --git a/drivers/net/ipa/ipa_data-v3.5.1.c b/drivers/net/ipa/ipa_data-v3.5.1.c
index d1c466abddb2..b7e32e87733e 100644
--- a/drivers/net/ipa/ipa_data-v3.5.1.c
+++ b/drivers/net/ipa/ipa_data-v3.5.1.c
@@ -94,6 +94,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.rx = {
 				.buffer_size	= 8192,
 				.pad_align	= ilog2(sizeof(u32)),
+				.aggr_time_limit = 500,
 			},
 		},
 	},
@@ -142,6 +143,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.aggregation	= true,
 			.rx = {
 				.buffer_size	= 8192,
+				.aggr_time_limit = 500,
 				.aggr_close_eof	= true,
 			},
 		},
diff --git a/drivers/net/ipa/ipa_data-v4.11.c b/drivers/net/ipa/ipa_data-v4.11.c
index b1991cc6f0ca..1be823e5c5c2 100644
--- a/drivers/net/ipa/ipa_data-v4.11.c
+++ b/drivers/net/ipa/ipa_data-v4.11.c
@@ -88,6 +88,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.rx = {
 				.buffer_size	= 8192,
 				.pad_align	= ilog2(sizeof(u32)),
+				.aggr_time_limit = 500,
 			},
 		},
 	},
@@ -135,6 +136,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.aggregation	= true,
 			.rx = {
 				.buffer_size	= 32768,
+				.aggr_time_limit = 500,
 				.aggr_close_eof	= true,
 			},
 		},
diff --git a/drivers/net/ipa/ipa_data-v4.2.c b/drivers/net/ipa/ipa_data-v4.2.c
index 1190a43e8743..683f1f91042f 100644
--- a/drivers/net/ipa/ipa_data-v4.2.c
+++ b/drivers/net/ipa/ipa_data-v4.2.c
@@ -84,6 +84,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.rx = {
 				.buffer_size	= 8192,
 				.pad_align	= ilog2(sizeof(u32)),
+				.aggr_time_limit = 500,
 			},
 		},
 	},
@@ -132,6 +133,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.aggregation	= true,
 			.rx = {
 				.buffer_size	= 8192,
+				.aggr_time_limit = 500,
 				.aggr_close_eof	= true,
 			},
 		},
diff --git a/drivers/net/ipa/ipa_data-v4.5.c b/drivers/net/ipa/ipa_data-v4.5.c
index 944f72b0f285..79398f286a9c 100644
--- a/drivers/net/ipa/ipa_data-v4.5.c
+++ b/drivers/net/ipa/ipa_data-v4.5.c
@@ -97,6 +97,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.rx = {
 				.buffer_size	= 8192,
 				.pad_align	= ilog2(sizeof(u32)),
+				.aggr_time_limit = 500,
 			},
 		},
 	},
@@ -144,6 +145,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.aggregation	= true,
 			.rx = {
 				.buffer_size	= 8192,
+				.aggr_time_limit = 500,
 				.aggr_close_eof	= true,
 			},
 		},
diff --git a/drivers/net/ipa/ipa_data-v4.9.c b/drivers/net/ipa/ipa_data-v4.9.c
index 16786bff7ef8..4b96efd05cf2 100644
--- a/drivers/net/ipa/ipa_data-v4.9.c
+++ b/drivers/net/ipa/ipa_data-v4.9.c
@@ -89,6 +89,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.rx = {
 				.buffer_size	= 8192,
 				.pad_align	= ilog2(sizeof(u32)),
+				.aggr_time_limit = 500,
 			},
 		},
 	},
@@ -136,6 +137,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
 			.aggregation	= true,
 			.rx = {
 				.buffer_size	= 8192,
+				.aggr_time_limit = 500,
 				.aggr_close_eof	= true,
 			},
 		},
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
index dbbeecf6df29..e15eb3cd3e33 100644
--- a/drivers/net/ipa/ipa_data.h
+++ b/drivers/net/ipa/ipa_data.h
@@ -96,71 +96,9 @@ struct gsi_channel_data {
 };

 /**
- * struct ipa_endpoint_tx_data - configuration data for TX endpoints
- * @seq_type:		primary packet processing sequencer type
- * @seq_rep_type:	sequencer type for replication processing
- * @status_endpoint:	endpoint to which status elements are sent
- *
- * The @status_endpoint is only valid if the endpoint's @status_enable
- * flag is set.
- */
-struct ipa_endpoint_tx_data {
-	enum ipa_seq_type seq_type;
-	enum ipa_seq_rep_type seq_rep_type;
-	enum ipa_endpoint_name status_endpoint;
-};
-
-/**
- * struct ipa_endpoint_rx_data - configuration data for RX endpoints
- * @buffer_size:	requested receive buffer size (bytes)
- * @pad_align:		power-of-2 boundary to which packet payload is aligned
- * @aggr_close_eof:	whether aggregation closes on end-of-frame
- *
- * With each packet it transfers, the IPA hardware can perform certain
- * transformations of its packet data.  One of these is adding pad bytes
- * to the end of the packet data so the result ends on a power-of-2 boundary.
- *
- * It is also able to aggregate multiple packets into a single receive buffer.
- * Aggregation is "open" while a buffer is being filled, and "closes" when
- * certain criteria are met.  One of those criteria is the sender indicating
- * a "frame" consisting of several transfers has ended.
- */
-struct ipa_endpoint_rx_data {
-	u32 buffer_size;
-	u32 pad_align;
-	bool aggr_close_eof;
-};
-
-/**
- * struct ipa_endpoint_config_data - IPA endpoint hardware configuration
- * @resource_group:	resource group to assign endpoint to
- * @checksum:		whether checksum offload is enabled
- * @qmap:		whether endpoint uses QMAP protocol
- * @aggregation:	whether endpoint supports aggregation
- * @status_enable:	whether endpoint uses status elements
- * @dma_mode:		whether endpoint operates in DMA mode
- * @dma_endpoint:	peer endpoint, if operating in DMA mode
- * @tx:			TX-specific endpoint information (see above)
- * @rx:			RX-specific endpoint information (see above)
- */
-struct ipa_endpoint_config_data {
-	u32 resource_group;
-	bool checksum;
-	bool qmap;
-	bool aggregation;
-	bool status_enable;
-	bool dma_mode;
-	enum ipa_endpoint_name dma_endpoint;
-	union {
-		struct ipa_endpoint_tx_data tx;
-		struct ipa_endpoint_rx_data rx;
-	};
-};
-
-/**
  * struct ipa_endpoint_data - IPA endpoint configuration data
  * @filter_support:	whether endpoint supports filtering
- * @config:		hardware configuration (see above)
+ * @config:		hardware configuration
  *
  * Not all endpoints support the IPA filtering capability.  A filter table
  * defines the filters to apply for those endpoints that support it.  The
@@ -168,12 +106,12 @@ struct ipa_endpoint_config_data {
  * for non-AP endpoints.  For this reason we define *all* endpoints used
  * in the system, and indicate whether they support filtering.
  *
- * The remaining endpoint configuration data applies only to AP endpoints.
+ * The remaining endpoint configuration data specifies default hardware
+ * configuration values that apply only to AP endpoints.
  */
 struct ipa_endpoint_data {
 	bool filter_support;
-	/* Everything else is specified only for AP endpoints */
-	struct ipa_endpoint_config_data config;
+	struct ipa_endpoint_config config;
 };

 /**
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index cea7b2e2ce96..385aa63ab4bb 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -35,7 +35,6 @@
 #define IPA_ENDPOINT_QMAP_METADATA_MASK		0x000000ff /* host byte order */

 #define IPA_ENDPOINT_RESET_AGGR_RETRY_MAX	3
-#define IPA_AGGR_TIME_LIMIT			500	/* microseconds */

 /** enum ipa_status_opcode - status element opcode hardware values */
 enum ipa_status_opcode {
@@ -81,6 +80,24 @@ static u32 aggr_byte_limit_max(enum ipa_version version)
 	return field_max(aggr_byte_limit_fmask(false));
 }

+/* Compute the aggregation size value to use for a given buffer size */
+static u32 ipa_aggr_size_kb(u32 rx_buffer_size, bool aggr_hard_limit)
+{
+	/* A hard aggregation limit will not be crossed; aggregation closes
+	 * if saving incoming data would cross the hard byte limit boundary.
+	 *
+	 * With a soft limit, aggregation closes *after* the size boundary
+	 * has been crossed.  In that case the limit must leave enough space
+	 * after that limit to receive a full MTU of data plus overhead.
+	 */
+	if (!aggr_hard_limit)
+		rx_buffer_size -= IPA_MTU + IPA_RX_BUFFER_OVERHEAD;
+
+	/* The byte limit is encoded as a number of kilobytes */
+
+	return rx_buffer_size / SZ_1K;
+}
+
 static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
 			    const struct ipa_gsi_endpoint_data *all_data,
 			    const struct ipa_gsi_endpoint_data *data)
@@ -93,7 +110,9 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
 		return true;

 	if (!data->toward_ipa) {
+		const struct ipa_endpoint_rx *rx_config;
 		u32 buffer_size;
+		u32 aggr_size;
 		u32 limit;

 		if (data->endpoint.filter_support) {
@@ -107,8 +126,10 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
 		if (data->ee_id != GSI_EE_AP)
 			return true;

-		buffer_size = data->endpoint.config.rx.buffer_size;
+		rx_config = &data->endpoint.config.rx;
+
+		/* The buffer size must hold an MTU plus overhead */
+		buffer_size = rx_config->buffer_size;
 		limit = IPA_MTU + IPA_RX_BUFFER_OVERHEAD;
 		if (buffer_size < limit) {
 			dev_err(dev, "RX buffer size too small for RX endpoint %u (%u < %u)\n",
@@ -116,26 +137,46 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
 			return false;
 		}

-		/* For an endpoint supporting receive aggregation, the
-		 * aggregation byte limit defines the point at which an
-		 * aggregation window will close.  It is programmed into the
-		 * IPA hardware as a number of KB.  We don't use "hard byte
-		 * limit" aggregation, so we need to supply enough space in
-		 * a receive buffer to hold a complete MTU plus normal skb
-		 * overhead *after* that aggregation byte limit has been
-		 * crossed.
-		 *
-		 * This check just ensures the receive buffer size doesn't
-		 * exceed what's representable in the aggregation limit field.
-		 */
-		if (data->endpoint.config.aggregation) {
-			limit += SZ_1K * aggr_byte_limit_max(ipa->version);
-			if (buffer_size > limit) {
-				dev_err(dev, "RX buffer size too large for aggregated RX endpoint %u (%u > %u)\n",
-					data->endpoint_id, buffer_size, limit);
+		if (!data->endpoint.config.aggregation) {
+			bool result = true;

-				return false;
+			/* No aggregation; check for bogus aggregation data */
+			if (rx_config->aggr_time_limit) {
+				dev_err(dev,
+					"time limit with no aggregation for RX endpoint %u\n",
+					data->endpoint_id);
+				result = false;
+			}
+
+			if (rx_config->aggr_hard_limit) {
+				dev_err(dev, "hard limit with no aggregation for RX endpoint %u\n",
+					data->endpoint_id);
+				result = false;
 			}
+
+			if (rx_config->aggr_close_eof) {
+				dev_err(dev, "close EOF with no aggregation for RX endpoint %u\n",
+					data->endpoint_id);
+				result = false;
+			}
+
+			return result;	/* Nothing more to check */
+		}
+
+		/* For an endpoint supporting receive aggregation, the byte
+		 * limit defines the point at which aggregation closes.  This
+		 * check ensures the receive buffer size doesn't result in a
+		 * limit that exceeds what's representable in the aggregation
+		 * byte limit field.
+		 */
+		aggr_size = ipa_aggr_size_kb(buffer_size - NET_SKB_PAD,
+					     rx_config->aggr_hard_limit);
+		limit = aggr_byte_limit_max(ipa->version);
+		if (aggr_size > limit) {
+			dev_err(dev, "aggregated size too large for RX endpoint %u (%u KB > %u KB)\n",
+				data->endpoint_id, aggr_size, limit);
+
+			return false;
 		}

 		return true;	/* Nothing more to check for RX */
@@ -332,7 +373,7 @@ static void ipa_endpoint_suspend_aggr(struct ipa_endpoint *endpoint)
 {
 	struct ipa *ipa = endpoint->ipa;

-	if (!endpoint->data->aggregation)
+	if (!endpoint->config.aggregation)
 		return;

 	/* Nothing to do if the endpoint doesn't have aggregation open */
@@ -401,12 +442,10 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 	struct gsi_trans *trans;
 	u32 count;

-	/* We need one command per modem TX endpoint.  We can get an upper
-	 * bound on that by assuming all initialized endpoints are modem->IPA.
-	 * That won't happen, and we could be more precise, but this is fine
-	 * for now.  End the transaction with commands to clear the pipeline.
+	/* We need one command per modem TX endpoint, plus the commands
+	 * that clear the pipeline.
 	 */
-	count = hweight32(initialized) + ipa_cmd_pipeline_clear_count();
+	count = ipa->modem_tx_count + ipa_cmd_pipeline_clear_count();
 	trans = ipa_cmd_trans_alloc(ipa, count);
 	if (!trans) {
 		dev_err(&ipa->pdev->dev,
@@ -437,7 +476,6 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)

 	ipa_cmd_pipeline_clear_add(trans);

-	/* XXX This should have a 1 second timeout */
 	gsi_trans_commit_wait(trans);

 	ipa_cmd_pipeline_clear_wait(ipa);
@@ -452,7 +490,7 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
 	u32 val = 0;

 	/* FRAG_OFFLOAD_EN is 0 */
-	if (endpoint->data->checksum) {
+	if (endpoint->config.checksum) {
 		enum ipa_version version = endpoint->ipa->version;

 		if (endpoint->toward_ipa) {
@@ -501,7 +539,7 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
 	u32 header_size = sizeof(struct rmnet_map_header);

 	/* Without checksum offload, we just have the MAP header */
-	if (!endpoint->data->checksum)
+	if (!endpoint->config.checksum)
 		return header_size;

 	if (version < IPA_VERSION_4_5) {
@@ -543,7 +581,7 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
 	struct ipa *ipa = endpoint->ipa;
 	u32 val = 0;

-	if (endpoint->data->qmap) {
+	if (endpoint->config.qmap) {
 		enum ipa_version version = ipa->version;
 		size_t header_size;

@@ -582,23 +620,27 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
 static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
 {
 	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
-	u32 pad_align = endpoint->data->rx.pad_align;
+	u32 pad_align = endpoint->config.rx.pad_align;
 	struct ipa *ipa = endpoint->ipa;
 	u32 val = 0;

-	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
+	if (endpoint->config.qmap) {
+		/* We have a header, so we must specify its endianness */
+		val |= HDR_ENDIANNESS_FMASK;	/* big endian */

-	/* A QMAP header contains a 6 bit pad field at offset 0.  The RMNet
-	 * driver assumes this field is meaningful in packets it receives,
-	 * and assumes the header's payload length includes that padding.
-	 * The RMNet driver does *not* pad packets it sends, however, so
-	 * the pad field (although 0) should be ignored.
-	 */
-	if (endpoint->data->qmap && !endpoint->toward_ipa) {
-		val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
-		/* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
-		val |= HDR_PAYLOAD_LEN_INC_PADDING_FMASK;
-		/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
+		/* A QMAP header contains a 6 bit pad field at offset 0.
+		 * The RMNet driver assumes this field is meaningful in
+		 * packets it receives, and assumes the header's payload
+		 * length includes that padding.  The RMNet driver does
+		 * *not* pad packets it sends, however, so the pad field
+		 * (although 0) should be ignored.
+		 */
+		if (!endpoint->toward_ipa) {
+			val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
+			/* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
+			val |= HDR_PAYLOAD_LEN_INC_PADDING_FMASK;
+			/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
+		}
 	}

 	/* HDR_PAYLOAD_LEN_INC_PADDING is 0 */
@@ -610,7 +652,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
 	 */
 	if (ipa->version >= IPA_VERSION_4_5) {
 		/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0, so MSB is 0 */
-		if (endpoint->data->qmap && !endpoint->toward_ipa) {
+		if (endpoint->config.qmap && !endpoint->toward_ipa) {
 			u32 offset;

 			offset = offsetof(struct rmnet_map_header, pkt_len);
@@ -635,7 +677,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);

 	/* Note that HDR_ENDIANNESS indicates big endian header fields */
-	if (endpoint->data->qmap)
+	if (endpoint->config.qmap)
 		val = (__force u32)cpu_to_be32(IPA_ENDPOINT_QMAP_METADATA_MASK);

 	iowrite32(val, endpoint->ipa->reg_virt + offset);
@@ -649,8 +691,8 @@ static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
 	if (!endpoint->toward_ipa)
 		return;		/* Register not valid for RX endpoints */

-	if (endpoint->data->dma_mode) {
-		enum ipa_endpoint_name name = endpoint->data->dma_endpoint;
+	if (endpoint->config.dma_mode) {
+		enum ipa_endpoint_name name = endpoint->config.dma_endpoint;
 		u32 dma_endpoint_id;

 		dma_endpoint_id = endpoint->ipa->name_map[name]->endpoint_id;
@@ -665,18 +707,6 @@ static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
 	iowrite32(val, endpoint->ipa->reg_virt + offset);
 }

-/* Compute the aggregation size value to use for a given buffer size */
-static u32 ipa_aggr_size_kb(u32 rx_buffer_size)
-{
-	/* We don't use "hard byte limit" aggregation, so we define the
-	 * aggregation limit such that our buffer has enough space *after*
-	 * that limit to receive a full MTU of data, plus overhead.
-	 */
-	rx_buffer_size -= IPA_MTU + IPA_RX_BUFFER_OVERHEAD;
-
-	return rx_buffer_size / SZ_1K;
-}
-
 /* Encoded values for AGGR endpoint register fields */
 static u32 aggr_byte_limit_encoded(enum ipa_version version, u32 limit)
 {
@@ -695,9 +725,13 @@ static u32 aggr_time_limit_encoded(enum ipa_version version, u32 limit)

 	if (version < IPA_VERSION_4_5) {
 		/* We set aggregation granularity in ipa_hardware_config() */
-		limit = DIV_ROUND_CLOSEST(limit, IPA_AGGR_GRANULARITY);
+		fmask = aggr_time_limit_fmask(true);
+		val = DIV_ROUND_CLOSEST(limit, IPA_AGGR_GRANULARITY);
+		WARN(val > field_max(fmask),
+		     "aggr_time_limit too large (%u > %u usec)\n",
+		     val, field_max(fmask) * IPA_AGGR_GRANULARITY);

-		return u32_encode_bits(limit, aggr_time_limit_fmask(true));
+		return u32_encode_bits(val, fmask);
 	}

 	/* IPA v4.5 expresses the time limit using Qtime.  The AP has
@@ -712,6 +746,9 @@ static u32 aggr_time_limit_encoded(enum ipa_version version, u32 limit)
 		/* Have to use pulse generator 1 (millisecond granularity) */
 		gran_sel = AGGR_GRAN_SEL_FMASK;
 		val = DIV_ROUND_CLOSEST(limit, 1000);
+		WARN(val > field_max(fmask),
+		     "aggr_time_limit too large (%u > %u usec)\n",
+		     limit, field_max(fmask) * 1000);
 	} else {
 		/* We can use pulse generator 0 (100 usec granularity) */
 		gran_sel = 0;
@@ -736,28 +773,29 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
 	enum ipa_version version = endpoint->ipa->version;
 	u32 val = 0;

-	if (endpoint->data->aggregation) {
+	if (endpoint->config.aggregation) {
 		if (!endpoint->toward_ipa) {
-			const struct ipa_endpoint_rx_data *rx_data;
+			const struct ipa_endpoint_rx *rx_config;
+			u32 buffer_size;
 			bool close_eof;
 			u32 limit;

-			rx_data = &endpoint->data->rx;
+			rx_config = &endpoint->config.rx;
 			val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK);
 			val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK);

-			limit = ipa_aggr_size_kb(rx_data->buffer_size);
+			buffer_size = rx_config->buffer_size;
+			limit = ipa_aggr_size_kb(buffer_size - NET_SKB_PAD,
+						 rx_config->aggr_hard_limit);
 			val |= aggr_byte_limit_encoded(version, limit);

-			limit = IPA_AGGR_TIME_LIMIT;
+			limit = rx_config->aggr_time_limit;
 			val |= aggr_time_limit_encoded(version, limit);

 			/* AGGR_PKT_LIMIT is 0 (unlimited) */

-			close_eof = rx_data->aggr_close_eof;
+			close_eof = rx_config->aggr_close_eof;
 			val |= aggr_sw_eof_active_encoded(version, close_eof);
-
-			/* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */
 		} else {
 			val |= u32_encode_bits(IPA_ENABLE_DEAGGR,
 					       AGGR_EN_FMASK);
@@ -942,7 +980,7 @@ static void ipa_endpoint_init_rsrc_grp(struct ipa_endpoint *endpoint)
 	struct ipa *ipa = endpoint->ipa;
 	u32 val;

-	val = rsrc_grp_encoded(ipa->version, endpoint->data->resource_group);
+	val = rsrc_grp_encoded(ipa->version, endpoint->config.resource_group);
 	iowrite32(val, ipa->reg_virt + offset);
 }

@@ -955,10 +993,10 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
 		return;		/* Register not valid for RX endpoints */

 	/* Low-order byte configures primary packet processing */
-	val |= u32_encode_bits(endpoint->data->tx.seq_type, SEQ_TYPE_FMASK);
+	val |= u32_encode_bits(endpoint->config.tx.seq_type, SEQ_TYPE_FMASK);

 	/* Second byte configures replicated packet processing */
-	val |= u32_encode_bits(endpoint->data->tx.seq_rep_type,
+	val |= u32_encode_bits(endpoint->config.tx.seq_rep_type,
 			       SEQ_REP_TYPE_FMASK);

 	iowrite32(val, endpoint->ipa->reg_virt + offset);
@@ -1016,13 +1054,13 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)

 	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);

-	if (endpoint->data->status_enable) {
+	if (endpoint->config.status_enable) {
 		val |= STATUS_EN_FMASK;
 		if (endpoint->toward_ipa) {
 			enum ipa_endpoint_name name;
 			u32 status_endpoint_id;

-			name = endpoint->data->tx.status_endpoint;
+			name = endpoint->config.tx.status_endpoint;
 			status_endpoint_id = ipa->name_map[name]->endpoint_id;

 			val |= u32_encode_bits(status_endpoint_id,
@@ -1046,7 +1084,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint,
 	u32 len;
 	int ret;

-	buffer_size = endpoint->data->rx.buffer_size;
+	buffer_size = endpoint->config.rx.buffer_size;
 	page = dev_alloc_pages(get_order(buffer_size));
 	if (!page)
 		return -ENOMEM;
@@ -1163,7 +1201,7 @@ static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint,
 static bool ipa_endpoint_skb_build(struct ipa_endpoint *endpoint,
 				   struct page *page, u32 len)
 {
-	u32 buffer_size = endpoint->data->rx.buffer_size;
+	u32 buffer_size = endpoint->config.rx.buffer_size;
 	struct sk_buff *skb;

 	/* Nothing to do if there's no netdev */
@@ -1270,7 +1308,7 @@ static bool ipa_endpoint_status_drop(struct ipa_endpoint *endpoint,
 static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
 				      struct page *page, u32 total_len)
 {
-	u32 buffer_size = endpoint->data->rx.buffer_size;
+	u32 buffer_size = endpoint->config.rx.buffer_size;
 	void *data = page_address(page) + NET_SKB_PAD;
 	u32 unused = buffer_size - total_len;
 	u32 resid = total_len;
@@ -1300,10 +1338,10 @@ static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
 		 * And if checksum offload is enabled a trailer containing
 		 * computed checksum information will be appended.
 		 */
-		align = endpoint->data->rx.pad_align ? : 1;
+		align = endpoint->config.rx.pad_align ? : 1;
 		len = le16_to_cpu(status->pkt_len);
 		len = sizeof(*status) + ALIGN(len, align);
-		if (endpoint->data->checksum)
+		if (endpoint->config.checksum)
 			len += sizeof(struct rmnet_map_dl_csum_trailer);

 		if (!ipa_endpoint_status_drop(endpoint, status)) {
@@ -1347,7 +1385,7 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,

 	/* Parse or build a socket buffer using the actual received length */
 	page = trans->data;
-	if (endpoint->data->status_enable)
+	if (endpoint->config.status_enable)
 		ipa_endpoint_status_parse(endpoint, page, trans->len);
 	else if (ipa_endpoint_skb_build(endpoint, page, trans->len))
 		trans->data = NULL;	/* Pages have been consumed */
@@ -1381,7 +1419,7 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
 		struct page *page = trans->data;

 		if (page) {
-			u32 buffer_size = endpoint->data->rx.buffer_size;
+			u32 buffer_size = endpoint->config.rx.buffer_size;

 			__free_pages(page, get_order(buffer_size));
 		}
@@ -1515,7 +1553,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
 	 * All other cases just need to reset the underlying GSI channel.
 	 */
 	special = ipa->version < IPA_VERSION_4_0 && !endpoint->toward_ipa &&
-			endpoint->data->aggregation;
+			endpoint->config.aggregation;
 	if (special && ipa_endpoint_aggr_active(endpoint))
 		ret = ipa_endpoint_reset_rx_aggr(endpoint);
 	else
@@ -1549,8 +1587,12 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
 		ipa_endpoint_init_hdr_metadata_mask(endpoint);
 	ipa_endpoint_init_mode(endpoint);
 	ipa_endpoint_init_aggr(endpoint);
-	if (!endpoint->toward_ipa)
-		ipa_endpoint_init_hol_block_disable(endpoint);
+	if (!endpoint->toward_ipa) {
+		if (endpoint->config.rx.holb_drop)
+			ipa_endpoint_init_hol_block_enable(endpoint, 0);
+		else
+			ipa_endpoint_init_hol_block_disable(endpoint);
+	}
 	ipa_endpoint_init_deaggr(endpoint);
 	ipa_endpoint_init_rsrc_grp(endpoint);
 	ipa_endpoint_init_seq(endpoint);
@@ -1830,7 +1872,7 @@ static void ipa_endpoint_init_one(struct ipa *ipa, enum ipa_endpoint_name name,
 	endpoint->channel_id = data->channel_id;
 	endpoint->endpoint_id = data->endpoint_id;
 	endpoint->toward_ipa = data->toward_ipa;
-	endpoint->data = &data->endpoint.config;
+	endpoint->config = data->endpoint.config;

 	ipa->initialized |= BIT(endpoint->endpoint_id);
 }
@@ -1880,6 +1922,8 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
 		if (data->endpoint.filter_support)
 			filter_map |= BIT(data->endpoint_id);
+		if (data->ee_id == GSI_EE_MODEM && data->toward_ipa)
+			ipa->modem_tx_count++;
 	}

 	if (!ipa_filter_map_valid(ipa, filter_map))
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index 12fd5b16c18e..01790c60bee8 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -41,6 +41,87 @@ enum ipa_endpoint_name {
 #define IPA_ENDPOINT_MAX		32	/* Max supported by driver */

 /**
+ * struct ipa_endpoint_tx - Endpoint configuration for TX endpoints
+ * @seq_type:		primary packet processing sequencer type
+ * @seq_rep_type:	sequencer type for replication processing
+ * @status_endpoint:	endpoint to which status elements are sent
+ *
+ * The @status_endpoint is only valid if the endpoint's @status_enable
+ * flag is set.
+ */
+struct ipa_endpoint_tx {
+	enum ipa_seq_type seq_type;
+	enum ipa_seq_rep_type seq_rep_type;
+	enum ipa_endpoint_name status_endpoint;
+};
+
+/**
+ * struct ipa_endpoint_rx - Endpoint configuration for RX endpoints
+ * @buffer_size:	requested receive buffer size (bytes)
+ * @pad_align:		power-of-2 boundary to which packet payload is aligned
+ * @aggr_time_limit:	time before aggregation closes (microseconds)
+ * @aggr_hard_limit:	whether aggregation closes before or after boundary
+ * @aggr_close_eof:	whether aggregation closes on end-of-frame
+ * @holb_drop:		whether to drop packets to avoid head-of-line blocking
+ *
+ * The actual size of the receive buffer is rounded up if necessary
+ * to be a power-of-2 number of pages.
+ *
+ * With each packet it transfers, the IPA hardware can perform certain
+ * transformations of its packet data.  One of these is adding pad bytes
+ * to the end of the packet data so the result ends on a power-of-2 boundary.
+ *
+ * It is also able to aggregate multiple packets into a single receive buffer.
+ * Aggregation is "open" while a buffer is being filled, and "closes" when
+ * certain criteria are met.
+ *
+ * A time limit can be specified to close aggregation.  Aggregation will be
+ * closed if this period passes after data is first written into a receive
+ * buffer.  If not specified, no time limit is imposed.
+ *
+ * Insufficient space available in the receive buffer can close aggregation.
+ * The aggregation byte limit defines the point (in units of 1024 bytes) in
+ * the buffer where aggregation closes.  With a "soft" aggregation limit,
+ * aggregation closes when a packet written to the buffer *crosses* that
+ * aggregation limit.  With a "hard" aggregation limit, aggregation will
+ * close *before* writing a packet that would cross that boundary.
+ */
+struct ipa_endpoint_rx {
+	u32 buffer_size;
+	u32 pad_align;
+	u32 aggr_time_limit;
+	bool aggr_hard_limit;
+	bool aggr_close_eof;
+	bool holb_drop;
+};
+
+/**
+ * struct ipa_endpoint_config - IPA endpoint hardware configuration
+ * @resource_group:	resource group to assign endpoint to
+ * @checksum:		whether checksum offload is enabled
+ * @qmap:		whether endpoint uses QMAP protocol
+ * @aggregation:	whether endpoint supports aggregation
+ * @status_enable:	whether endpoint uses status elements
+ * @dma_mode:		whether endpoint operates in DMA mode
+ * @dma_endpoint:	peer endpoint, if operating in DMA mode
+ * @tx:			TX-specific endpoint information (see above)
+ * @rx:			RX-specific endpoint information (see above)
+ */
+struct ipa_endpoint_config {
+	u32 resource_group;
+	bool checksum;
+	bool qmap;
+	bool aggregation;
+	bool status_enable;
+	bool dma_mode;
+	enum ipa_endpoint_name dma_endpoint;
+	union {
+		struct ipa_endpoint_tx tx;
+		struct ipa_endpoint_rx rx;
+	};
+};
+
+/**
  * enum ipa_replenish_flag: RX buffer replenish flags
  *
  * @IPA_REPLENISH_ENABLED:	Whether receive buffer replenishing is enabled
@@ -60,7 +141,7 @@ enum ipa_replenish_flag {
  * @channel_id:		GSI channel used by the endpoint
  * @endpoint_id:	IPA endpoint number
  * @toward_ipa:		Endpoint direction (true = TX, false = RX)
- * @data:		Endpoint configuration data
+ * @config:		Default endpoint configuration
  * @trans_tre_max:	Maximum number of TRE descriptors per transaction
  * @evt_ring_id:	GSI event ring used by the endpoint
  * @netdev:		Network device pointer, if endpoint uses one
@@ -74,7 +155,7 @@ struct ipa_endpoint {
 	u32 channel_id;
 	u32 endpoint_id;
 	bool toward_ipa;
-	const struct ipa_endpoint_config_data *data;
+	struct ipa_endpoint_config config;

 	u32 trans_tre_max;
 	u32 evt_ring_id;
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index b35170a93b0f..307bed2ee707 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -191,7 +191,8 @@ void ipa_interrupt_add(struct ipa_interrupt *interrupt,
 	struct ipa *ipa = interrupt->ipa;
 	u32 offset;

-	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
+	if (WARN_ON(ipa_irq >= IPA_IRQ_COUNT))
+		return;

 	interrupt->handler[ipa_irq] = handler;

@@ -208,7 +209,8 @@ ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
 	struct ipa *ipa = interrupt->ipa;
 	u32 offset;

-	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
+	if (WARN_ON(ipa_irq >= IPA_IRQ_COUNT))
+		return;

 	/* Update the IPA interrupt mask to disable it */
 	interrupt->enabled &= ~BIT(ipa_irq);
diff --git a/drivers/net/ipa/ipa_modem.c b/drivers/net/ipa/ipa_modem.c
index 27d87097433f..c8b1c4d9c507 100644
--- a/drivers/net/ipa/ipa_modem.c
+++ b/drivers/net/ipa/ipa_modem.c
@@ -9,6 +9,8 @@
 #include <linux/netdevice.h>
 #include <linux/skbuff.h>
 #include <linux/if_rmnet.h>
+#include <linux/etherdevice.h>
+#include <net/pkt_sched.h>
 #include <linux/pm_runtime.h>
 #include <linux/remoteproc/qcom_rproc.h>

@@ -127,7 +129,7 @@ ipa_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 		goto err_drop_skb;

 	endpoint = ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX];
-	if (endpoint->data->qmap && skb->protocol != htons(ETH_P_MAP))
+	if (endpoint->config.qmap && skb->protocol != htons(ETH_P_MAP))
 		goto err_drop_skb;

 	/* The hardware must be powered for us to transmit */
@@ -203,15 +205,20 @@ static const struct net_device_ops ipa_modem_ops = {
 static void ipa_modem_netdev_setup(struct net_device *netdev)
 {
 	netdev->netdev_ops = &ipa_modem_ops;
-	ether_setup(netdev);
-	/* No header ops (override value set by ether_setup()) */
+
 	netdev->header_ops = NULL;
 	netdev->type = ARPHRD_RAWIP;
 	netdev->hard_header_len = 0;
+	netdev->min_header_len = ETH_HLEN;
+	netdev->min_mtu = ETH_MIN_MTU;
 	netdev->max_mtu = IPA_MTU;
 	netdev->mtu = netdev->max_mtu;
 	netdev->addr_len = 0;
+	netdev->tx_queue_len = DEFAULT_TX_QUEUE_LEN;
 	netdev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+	netdev->priv_flags |= IFF_TX_SKB_SHARING;
+	eth_broadcast_addr(netdev->broadcast);
+
 	/* The endpoint is configured for QMAP */
 	netdev->needed_headroom = sizeof(struct rmnet_map_header);
 	netdev->needed_tailroom = IPA_NETDEV_TAILROOM;