Thinking about performance and concurrency
This talk dives into the theory and practice of software performance. We'll start with the fundamentals of computer subsystems and queueing theory: how CPU, disk, and network behavior can be described with queueing theory, the concepts of utilization and saturation, and how we can reason about this behavior with the analytical model known as Little's Law. Next, we'll dive into concurrency, and why it is increasingly important as the single-core speed gains once driven by Moore's Law flatten; we'll discuss the behavior of different types of locking and the predictions of Amdahl's Law. Theory is no good without putting it into practice, so we'll apply these analytical models in a small performance troubleshooting demo, using the standard Linux sysstat and perf toolsets. From the perspective of these lower-level performance tools, we'll see what insights can be gleaned about the performance of a process, and the common pitfalls inherent in measuring performance. This deep dive into software performance will conclude by looking forward: whether the work is instructions flowing through a microprocessor or data flying between microservices, how can these concepts inform our design choices as software engineers, to build more performant and scalable software systems?
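As a taste of the two analytical models named above, here is a minimal sketch in Python; the request rates, latencies, and parallel fractions are illustrative numbers chosen for this example, not figures from the talk.

```python
def littles_law_occupancy(arrival_rate_per_s: float, avg_latency_s: float) -> float:
    """Little's Law: L = lambda * W.
    Average number of items in the system equals the arrival rate
    times the average time each item spends in the system."""
    return arrival_rate_per_s * avg_latency_s


def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Amdahl's Law: speedup from n workers when only a fraction
    of the work can be parallelized; the serial fraction caps the gain."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)


if __name__ == "__main__":
    # A service absorbing 200 req/s at 50 ms average latency holds,
    # on average, 200 * 0.050 = 10 requests in flight.
    print(littles_law_occupancy(200, 0.050))   # -> 10.0

    # With 95% of the work parallelizable, 16 cores yield roughly a
    # 9x speedup, not 16x -- the 5% serial portion dominates.
    print(amdahl_speedup(0.95, 16))
```

The Amdahl calculation in particular motivates the locking discussion: every critical section enlarges the serial fraction, and therefore lowers the ceiling on achievable speedup no matter how many cores are added.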