Networking research spans a wide range of topics. Each year, we focus on specific topics that can benefit the research community and Meta, and these topics may change from year to year. This year, we are soliciting proposals that focus on networking for AI. Example topics include the following:
1. Hardware computational offloading for AI workloads: Offloading and accelerating AI computation and inference through programmable switches, SmartNICs, and other novel hardware/software co-design techniques at the network layer.
2. Novel end-to-end transport designs for distributed AI training: Tackling transport-layer challenges for compute fabrics built on very high-bandwidth, low-latency interconnects.
3. Joint optimization of scheduling, resource allocation, communication collectives, and the network: Co-optimizing AI workloads and the network for resource allocation and dynamic scheduling.
4. New network interconnect architectures for AI training and inference: Any new data center topologies or interconnects that address the scalability and very high bandwidth requirements introduced by AI workloads (terabits per second per accelerator).