In this article, our engineering team explains how we test the performance (speed) of our application to ensure it responds quickly when our customers use its features. We’ll delve into our methods for automatically spotting issues and analysing long-term data trends to maintain a consistently high-quality user experience.
Before diving into performance testing, we prioritised understanding our users’ experience. By analysing key user journeys, we identified mission-critical API calls and their impact on user satisfaction. This understanding shaped our performance testing focus, targeting APIs and processes that directly influence the user experience.
To simulate real-world scenarios, we developed tools to stage high volumes of data within our solutions. This pre-testing preparation mirrors actual usage patterns, ensuring our performance tests are both realistic and impactful.
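As a rough illustration of what such a staging tool can look like, here is a minimal Python sketch that seeds synthetic records through a REST endpoint before a run; the URL, payload shape, and record volume are hypothetical rather than our actual values.

```python
"""Hypothetical sketch of a pre-test data-staging tool.

Assumes a REST endpoint for creating records; the base URL, payload,
and record count are illustrative placeholders.
"""
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://test-env.example.com/api"  # hypothetical test environment
RECORD_COUNT = 50_000                          # illustrative volume


def create_record(i: int) -> None:
    # Each staged record mimics the shape of real customer data.
    payload = {"name": f"synthetic-record-{i}", "status": "active"}
    response = requests.post(f"{BASE_URL}/records", json=payload, timeout=30)
    response.raise_for_status()


if __name__ == "__main__":
    # Parallel staging keeps the setup phase short even at high volumes.
    with ThreadPoolExecutor(max_workers=32) as pool:
        list(pool.map(create_record, range(RECORD_COUNT)))
```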
Every night, our release pipeline runs a comprehensive suite of tests designed to validate the end-to-end functionality, stability, performance and usability of our platform.
This meticulous pipeline ensures an idempotent environment: every run starts from the same known state, so results remain consistent regardless of what previous runs left behind.
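To make the idempotency point concrete, a reset step of roughly this shape can run at the start of each suite; the admin endpoints below are hypothetical stand-ins for whatever teardown and seeding mechanism a platform exposes.

```python
"""Hypothetical sketch of an environment-reset step run before each
nightly suite, so results never depend on leftover state.
The reset and seed endpoints are illustrative."""
import requests

BASE_URL = "https://test-env.example.com/api"  # hypothetical


def reset_environment() -> None:
    # Tear down anything a previous run may have left behind...
    requests.post(f"{BASE_URL}/admin/reset", timeout=120).raise_for_status()
    # ...then re-seed the same baseline dataset every time, so each
    # nightly run starts from an identical, known state.
    requests.post(
        f"{BASE_URL}/admin/seed",
        json={"profile": "nightly-baseline"},
        timeout=600,
    ).raise_for_status()
```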
The Performance Testing Process
Our performance testing stage centres on measuring the efficiency of critical API routes: those that perform significant functions or handle heavy traffic. We use Apache JMeter to drive these tests.
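For context, JMeter test plans are typically executed headlessly in pipelines via its command-line interface. A minimal Python sketch of such an invocation, with placeholder paths:

```python
"""Sketch of launching an Apache JMeter test plan in non-GUI mode
from a pipeline step. All paths are placeholders."""
import subprocess


def run_jmeter(test_plan: str, results_file: str, report_dir: str) -> None:
    # -n: non-GUI mode, -t: test plan, -l: JTL results log,
    # -e -o: generate an HTML dashboard report after the run.
    subprocess.run(
        ["jmeter", "-n", "-t", test_plan, "-l", results_file,
         "-e", "-o", report_dir],
        check=True,
    )


if __name__ == "__main__":
    run_jmeter("plans/critical_api_routes.jmx", "out/results.jtl", "out/report")
```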
All performance data is stored in Amazon S3 using a Hive-style partition layout, catalogued with AWS Glue and queried through Amazon Athena. This setup allows us to visualise long-term trends in Amazon QuickSight. These insights are then reviewed regularly during developer discussions, enabling us to stay proactive.
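To make the storage layout concrete: Hive-style key=value prefixes in S3 are what allow Glue to catalogue the results as partitioned tables that Athena can query efficiently. A sketch using boto3, with illustrative bucket, database, table, and column names:

```python
"""Sketch of publishing a run's results to S3 under Hive-style
partitions and querying them with Athena. Bucket, database, and
table names are illustrative, not our actual resources."""
import datetime

import boto3

BUCKET = "perf-results-example"  # hypothetical bucket


def upload_results(results_file: str) -> None:
    # Hive-style key=value prefixes let Glue crawl the data into
    # partitioned tables that Athena can prune efficiently.
    today = datetime.date.today()
    key = (f"jmeter/year={today.year}/month={today.month:02d}/"
           f"day={today.day:02d}/results.jtl")
    boto3.client("s3").upload_file(results_file, BUCKET, key)


def query_daily_p95() -> str:
    # Kick off an Athena query over the partitioned table; QuickSight
    # dashboards can be driven by the same kind of SQL for trend charts.
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=(
            "SELECT year, month, day, "
            "approx_percentile(elapsed, 0.95) AS p95_ms "
            "FROM jmeter_results GROUP BY year, month, day"
        ),
        QueryExecutionContext={"Database": "perf_db"},
        ResultConfiguration={"OutputLocation": f"s3://{BUCKET}/athena-output/"},
    )
    return response["QueryExecutionId"]
```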
The Importance of Long-Term Trends
While immediate alerts for performance deviations are invaluable, we find they only tell part of the story. Long-term trends highlight gradual changes, for instance an API slowly becoming less efficient over time. Spotting these patterns early allows us to address potential issues before they escalate, reducing costly interventions down the line.
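One simple way to surface such drift is to compare a recent window of results against a longer baseline. A sketch, with illustrative window sizes and threshold:

```python
"""Sketch of a simple gradual-regression check: compare the mean
latency of the most recent runs against a longer baseline window.
Window sizes and threshold are illustrative."""
from statistics import mean


def gradual_regression(daily_p95_ms: list[float],
                       recent: int = 7,
                       baseline: int = 30,
                       threshold: float = 1.15) -> bool:
    """Return True if the last `recent` days are more than `threshold`
    times slower on average than the preceding `baseline` days."""
    if len(daily_p95_ms) < recent + baseline:
        return False  # not enough history to judge a trend yet
    recent_mean = mean(daily_p95_ms[-recent:])
    baseline_mean = mean(daily_p95_ms[-(recent + baseline):-recent])
    return recent_mean > baseline_mean * threshold


# Example: a slow upward drift that a single-run alert would miss.
history = [100 + 2 * day for day in range(60)]  # +2 ms per day
print(gradual_regression(history))  # True
```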
Spending time with customers, particularly face-to-face, remains an irreplaceable practice, especially in the post-Covid era. It fosters deeper connections and provides clarity on user behaviours that virtual interactions may not fully capture. Many of our most successful innovations have stemmed from on-site visits. Discussing challenges and witnessing how users interact with software in real environments can spark creative, customer-centric solutions that advance the platform.
All of these stages, from environment reset and data staging through test execution to results publishing, run within an Azure DevOps release pipeline that our team uses to ensure effective performance testing and evaluation.
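The exact stages are specific to our platform, but as an illustration, a pipeline job of roughly this shape could chain the steps described in this article; the script names below are hypothetical.

```python
"""Hypothetical sketch of a release-pipeline job chaining the stages
described above. Script names are illustrative; in practice each step
may be a separate pipeline task."""
import subprocess
import sys

STEPS = [
    ["python", "reset_environment.py"],  # idempotent starting state
    ["python", "stage_test_data.py"],    # high-volume, realistic data
    ["python", "run_jmeter_suite.py"],   # measure critical API routes
    ["python", "publish_results.py"],    # upload results to S3 for Athena
]

for step in STEPS:
    print("Running:", " ".join(step), flush=True)
    if subprocess.run(step).returncode != 0:
        sys.exit(1)  # fail the pipeline stage as soon as a step fails
```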
Our performance testing strategy continues to evolve, and further enhancements are on our roadmap.
Our Conclusion
By integrating performance testing into our pipelines, we’ve established a robust approach that ensures our platform consistently delivers the experience our users expect. Combining tools like Apache JMeter, Amazon QuickSight, and Azure DevOps Pipelines has been transformative, enabling us to monitor, analyse, and improve performance effectively.
Mateo Nores, Sam Williams