
Synthetic and Real-User Monitoring: Better Together


In my last post I provided a high-level overview of Real User Monitoring (RUM): what it is, how it works, and where it fits into your overall performance strategy. For those of you new to application performance management, each approach has its respective pros and cons. A passive approach like RUM (passive because measurements can only be generated from users actually visiting and interacting with your site and applications) has its benefits, but it also has some glaring drawbacks.

The reality is that RUM's greatest asset is also arguably its greatest weakness: it only works if people are visiting and using your site. RUM offers incredible insight by monitoring, capturing, and analyzing every user interaction on your website, but you still need real traffic for it to work.

What happens when you want to test a new application or site upgrade before it goes live? What if you're concerned that a provider isn't meeting its SLA on an application that receives less traffic than the rest of your site? How do you test site performance in geographies where you receive little to no traffic? Do you wait for that international marketing campaign to flood the site with UK visitors, only to discover that the site performs horribly in Europe? These are all performance questions that RUM simply can't answer, and where synthetic (also referred to as active) monitoring can fill in the gaps.

Where synthetic monitoring really shines is in its ability to reliably measure website performance from select locations, and across pre-defined user paths, at regular intervals. The basic idea behind synthetic monitoring is to ensure that your web properties and key user transactions are always performing properly, even when there is no real user traffic coming through the site or application. But how does it work?

Synthetic monitoring is generally built around a browser in the cloud, run on a remote server, giving you the ability to monitor performance from specific geographic locations. Because synthetic monitoring begins in the cloud, tests generally originate from fairly controlled, reliable environments with dedicated hardware.

Additionally, synthetic monitoring often provides the ability to test and monitor from different browser or rendering agents. Depending on the provider, these may be actual versions of today's most popular browsers, or pseudo-browsers developed by the provider to closely emulate Chrome, Firefox, and IE. This isn't true for every solution, but most operate this way.

The basic workflow for a synthetic test begins with defining what you want to test. This could be anything from something as simple as pinging a page to executing a particular transaction within an application. Your provider then sends your test out to the agent, using the browser and geographic location specified. In some instances you may not need to test against multiple browsers or locations, in which case your test will most likely run using the provider's default options.
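If it helps to make that concrete, here is a rough sketch of what such a test definition might look like in code. The field names and values below are purely illustrative, not any particular provider's API:

```python
from dataclasses import dataclass, field

# Hypothetical test definition -- the fields are illustrative,
# not any particular monitoring provider's API.
@dataclass
class SyntheticCheck:
    name: str
    url: str
    locations: list = field(default_factory=lambda: ["us-east"])  # where agents run
    browser: str = "chrome"        # real browser or provider pseudo-browser
    interval_seconds: int = 300    # how often the check executes

# A transaction-style check run from two geographies, once a minute.
checkout_check = SyntheticCheck(
    name="checkout-flow",
    url="https://www.example.com/checkout",
    locations=["us-east", "eu-west"],  # test from geographies with little real traffic
    interval_seconds=60,
)
```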

The agent then requests your website, just like a normal browser would, and your site responds with the page or transaction. The difference is that, as the agent receives the requested page or transaction back, it also captures and measures performance data like response and load times. This data is then sent back to your provider, where it is aggregated and sorted so that you can analyze it effectively.
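To give you a feel for the measurement itself, here is a bare-bones sketch using only Python's standard library. A real agent drives a full browser and captures far more than this, but the core idea, time the request and record the result, is the same (the URL is a placeholder):

```python
import time
import urllib.request

def measure_page(url: str) -> dict:
    """Fetch a page the way a bare-bones agent might and time the round trip."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        body = response.read()               # pull down the full payload
        status = response.status
        elapsed = time.perf_counter() - start
    return {
        "url": url,
        "status": status,
        "bytes": len(body),
        "response_time_s": round(elapsed, 3),  # total fetch time
    }

print(measure_page("https://www.example.com/"))
```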

Results are typically displayed using a waterfall chart, like the one below, which is essentially a visual representation of every request the page or transaction makes, in order, over the total execution time. This provides an easy way to quickly identify any performance bottlenecks, like the one highlighted in the red box.

[Figure: waterfall chart from a synthetic test, showing each request in sequence with a performance bottleneck highlighted in a red box]
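Just to illustrate the idea (not how any provider actually renders it), here is a toy script that prints a text waterfall from made-up per-resource timings; the slow request stands out immediately:

```python
# Toy text waterfall built from per-resource timings.
# The requests and timings here are made up for illustration.
resources = [
    ("GET /",         0.00, 0.35),   # (request, start offset s, duration s)
    ("GET /app.css",  0.36, 0.12),
    ("GET /app.js",   0.36, 0.48),
    ("GET /hero.jpg", 0.50, 1.90),   # the slow request a waterfall exposes
]

scale = 40  # total chart width in characters
total = max(start + duration for _, start, duration in resources)
for name, start, duration in resources:
    lead = int(start / total * scale)
    bar = max(int(duration / total * scale), 1)
    print(f"{name:<14}|{' ' * lead}{'#' * bar}  {duration * 1000:.0f} ms")
```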

The monitoring aspect of synthetic testing begins when these tests are run at regular intervals, so that you can baseline site performance, identify any issues, and target areas for optimization. The information returned depends on the provider you select; however, most of today's Application Performance Management (APM) providers offer an incredibly granular breakdown of the page load process, and even delve into perceived user metrics and how these timings affect the overall user experience.
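As a rough sketch of that baselining idea, the loop below (reusing the hypothetical SyntheticCheck and measure_page sketches from above) keeps a rolling history of response times and flags any run that strays far from the median. A real provider handles this scheduling, aggregation, and alerting for you:

```python
import statistics
import time

def monitor(check, measure, history_size=100, threshold=2.0):
    """Run a check at its interval and flag runs that drift from the baseline.

    `check` and `measure` refer to the hypothetical SyntheticCheck and
    measure_page sketches above; real providers schedule this for you.
    """
    history = []
    while True:
        result = measure(check.url)
        history = (history + [result["response_time_s"]])[-history_size:]
        baseline = statistics.median(history)   # crude performance baseline
        if len(history) >= 5 and result["response_time_s"] > threshold * baseline:
            print(f"ALERT: {check.name} took {result['response_time_s']}s "
                  f"(baseline {baseline:.2f}s)")
        time.sleep(check.interval_seconds)
```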

After this quick overview, you can probably start to see how synthetic monitoring alone may fall short in some areas, mostly stemming from the fact that these tests are generated in controlled and predictable environments. They just aren't truly representative of what real users are experiencing at any given time. However, they do provide consistent, regular, and reliable insight into your website and application performance without the need for actual site traffic, and in doing so, they allow you to manage performance to a baseline.

To put it in a sports analogy, synthetic monitoring is your prevent defense: performance issues can still get through, but you are rarely going to give up the big play. RUM, on the other hand, is your man-to-man coverage, providing unrivaled insight into the performance delivered to each and every user. The fact is that these two technologies are incredibly complementary.

At the risk of taking my comparison too far, employing both RUM and synthetic monitoring together is the equivalent of playing 22 men on your defense. Even if the opponent has Peyton Manning at QB, he's probably not throwing a touchdown... you might even keep him out of field goal range (if you're the Seahawks, that is).
