“Second is the rise of AI-powered systems that depend on fast, reliable access to edge or cloud-based intelligence.”
I’m sorry… what?
Is that just word salad? I’m not seeing “AI” as anything but an excuse there. On the cloud side, AI means server farms with physical interconnects; the same goes for endpoint AI and edge-server AI.
Are they saying that accessing these systems depends on fast, reliable access? Like, faster and more reliable than using Google from your web browser over the past 20 years?
The whole point of ML systems is that the heavy, speed-dependent compute happens somewhere with dedicated bandwidth to handle it. The interface back to the user can afford to be slower and lossier, because the service can run many steps on its own before it needs more input.
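To make that concrete, here’s a minimal sketch of what the client side of such a system tends to look like. The endpoint URL and payload shape are made up for illustration; the point is the generous timeout and retry-with-backoff. The link from the user can be slow and lossy precisely because the real work happens on the other end.

```python
# Hypothetical client for a remote inference service. The URL and JSON
# payload are assumptions for the sake of the example, not a real API.
import json
import time
import urllib.error
import urllib.request

INFERENCE_URL = "https://example.com/v1/infer"  # hypothetical endpoint

def ask_model(prompt: str, retries: int = 3, timeout_s: float = 30.0) -> dict:
    """Send one request; tolerate a slow or lossy last-mile link."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        INFERENCE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    for attempt in range(retries):
        try:
            # Generous timeout: the user-facing hop is not the fast path.
            with urllib.request.urlopen(req, timeout=timeout_s) as resp:
                return json.load(resp)
        except (urllib.error.URLError, TimeoutError):
            # Dropped or slow connection: back off and try again.
            time.sleep(2 ** attempt)
    raise RuntimeError("service unreachable after retries")
```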