Task scheduling on standard operating systems is to some extent non-deterministic, meaning one cannot give a guaranteed, mathematically provable upper bound on execution time. For most applications this is desirable, as it increases throughput, but a real-time system must be able to guarantee an upper bound that a given task will never exceed. Rather than executing something as fast as possible, the aim is to **execute tasks consistently**, in a deterministic fashion: what matters is the **worst-case latency** rather than the average latency. There are different approaches to making (Linux) operating systems real-time capable; these are discussed in the next section.

An introduction to real-time operating systems can be found [here](https://www.youtube.com/watch?v=4UY7hQjEW34) and [here](https://www.youtube.com/watch?v=w3yT8zJe0Uw). [This NASA conference paper](https://ntrs.nasa.gov/citations/20200002390) discusses challenges the authors encountered with Linux as a real-time operating system. If you are interested in how the Linux kernel and scheduler work under the hood, I recommend [this guide](https://wxdublin.gitbooks.io/deep-into-linux-and-beyond/content/index.html), written by someone much more knowledgeable on this topic than I am; it also discusses load balancing on multi-core architectures.
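The difference between average and worst-case latency can be made concrete by measuring how much longer a short sleep takes than requested. This is a minimal sketch in the spirit of tools like `cyclictest` (the function name and parameters are my own, not from any library); on a non-real-time kernel the worst-case overshoot is typically far larger than the average:

```python
import time

def measure_sleep_latency(iterations=200, interval_s=0.001):
    """Request short sleeps and record how much longer than requested
    each one actually took, i.e. the wake-up (scheduling) latency."""
    overshoots = []
    for _ in range(iterations):
        start = time.monotonic()
        time.sleep(interval_s)
        elapsed = time.monotonic() - start
        # Latency is the overshoot beyond the requested interval.
        overshoots.append(max(0.0, elapsed - interval_s))
    avg = sum(overshoots) / len(overshoots)
    worst = max(overshoots)
    return avg, worst

if __name__ == "__main__":
    avg, worst = measure_sleep_latency()
    print(f"average latency: {avg * 1e6:.1f} us, "
          f"worst case: {worst * 1e6:.1f} us")
```

A real-time system is judged by the worst-case number: a single large outlier is a failure even if the average looks excellent.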