How your clients' networks affect their user experience and your server infrastructure costs in a WebRTC platform

Real-time video applications seem fairly simple at first glance. A user clicks "Join", video and audio start flowing, and everyone can see and hear each other.
But under the hood, WebRTC makes a series of complex networking decisions that determine how media actually travels across the internet. These decisions ultimately impact both your end users and your server infrastructure, affecting:
- Video quality and latency in your video sessions, and therefore user experience.
- Server resource consumption, and therefore infrastructure cost.
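
To see these decisions at work, you can ask the browser which ICE candidate pair a connection actually ended up using. The following minimal sketch, assuming an already-connected `RTCPeerConnection` named `pc`, uses the standard `getStats()` API to log whether media flows directly between the peers or through a relay:

```typescript
// Inspect the ICE candidate pair selected by a live RTCPeerConnection.
// Assumes `pc` is an already-connected RTCPeerConnection.
async function logSelectedCandidatePair(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();

  stats.forEach((report) => {
    // The nominated, succeeded candidate pair is the one carrying media.
    if (
      report.type === "candidate-pair" &&
      report.state === "succeeded" &&
      report.nominated
    ) {
      const local = stats.get(report.localCandidateId);
      const remote = stats.get(report.remoteCandidateId);
      // candidateType is "host", "srflx", "prflx" or "relay".
      // "relay" means traffic flows through a TURN server, adding latency
      // for the user and bandwidth cost on your infrastructure.
      console.log("local candidate type:", local?.candidateType);
      console.log("remote candidate type:", remote?.candidateType);
    }
  });
}
```

A connection that ends up on a `relay` candidate works, but it behaves very differently from a direct one, both in latency and in the bandwidth your servers pay for.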
So: not all WebRTC connections between a client and a media server are equal. There are three factors to consider: the WebRTC media server that you deploy, how you configure your server's network, and the strictness of your clients' firewalls.
You usually have full control over the first two factors: the WebRTC media server you deploy should support the most modern connectivity mechanisms, and the network where you deploy it should be properly configured to allow optimal connections. The third factor, the client's network, may be under your control if you're deploying an internal solution for a company, but in consumer-facing applications, where users connect from their home networks or mobile carriers, you have no control at all.
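
On the client side, whether users can take advantage of those connectivity mechanisms usually comes down to the ICE server configuration you hand them. A minimal sketch, with placeholder URLs and credentials, assuming you operate your own STUN/TURN endpoints:

```typescript
// Client-side ICE configuration that pairs with a properly deployed
// media server. URLs and credentials below are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    // STUN lets clients discover their public address (cheap, no media relay).
    { urls: "stun:stun.example.com:3478" },
    // TURN relays media when direct connectivity fails (costs server bandwidth).
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turns:turn.example.com:443?transport=tcp",
      ],
      username: "demo-user",
      credential: "demo-secret",
    },
  ],
});
```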
For these reasons, it is crucial to understand how modern WebRTC connectivity works and how different network conditions impact your users' experience and your server infrastructure costs.