OVERVIEW
The Grinder is a great tool for load and performance testing, but there has never been a good set of tools for integrating the data it generates with data from other sources. For example, a performance engineer might reasonably want a chart showing the transactions-per-second data generated by The Grinder alongside the server-side CPU data generated by vmstat. While not impossible, there have never been standard tools or processes that make this easy.
I recently decided to work on that problem. I wanted to find a good way to tie Grinder data together with a wide variety of other available performance metrics. I was willing to write new code as needed, but started with a heavy bias for using existing tools. I came up with a solution that uses three different data-collection tools, with Graphite as the unifying back end and data visualizer.
1. GRAPHITE FOR VISUALIZATION
Graphite is a great tool for storing and visualizing time-series data collected from a variety of different sources.
1.1 Key Features
- Simplicity -- Graphite is always on. Your data collectors are always on. You no longer have to remember to start server monitoring along with your Grinder run, or write wrapper scripts that start your Grinder agent and your server monitoring at the same time. Just kick off your Grinder test, then look at the results when you're done.
- It's easy to integrate data from a variety of sources into Graphite. For many common forms of data, tools to move it into Graphite have been in place for a long time.
- Graph creation is flexible and simple. In the Graphite UI it's easy to build new types of graphs interactively, and graphs can also be generated programmatically using the Graphite render API (an example URL appears just after this list).
- It's possible to save dashboards with a pre-configured collection of your favorite graph types. This is handy, since it saves you from having to reconfigure your aggregate graphs after each Grinder run.
- Graphite is not the only tool available for managing time-series data, but it is in wide use. Lots of shops use Graphite, which makes it more likely you can find the help (or the tools) you need.
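As a concrete illustration of the programmatic option mentioned above: any graph you can build in the UI can also be requested as a plain URL from Graphite's render API. The metric name below is made up; substitute whatever appears in your own metric tree.

    http://graphite.example.com/render?target=grinder.my_test.tps&from=-30minutes&width=800&height=400&format=png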
1.2 Why not Cacti or Ganglia?
Cacti and Ganglia are both great tools. Where I work, our ops team uses Ganglia extensively to monitor what's going on in production, so I initially started this effort with a preference for using Ganglia as the back end instead of Graphite.
Ganglia and Cacti are both built atop RRD. Unfortunately, RRD assumes all incoming data arrives in real time; there is no good way to backfill old, timestamped data into an RRD back end. This rules it out for processing the non-real-time data contained in logs from completed Grinder runs.
Graphite is not built on top of RRD. It uses an alternative data-storage layer named Whisper, which was specifically designed to get around this limitation and to store intermittent, timestamped data. This is perfect for The Grinder, which produces data in separate blocks of time for each test.
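To make the difference concrete, here is a minimal sketch (in Python) of pushing a backdated data point into Graphite over Carbon's plaintext protocol. The host and metric name are placeholders. Because each line carries its own timestamp, data from a test run that finished an hour ago can be sent as-is -- exactly what RRD-based back ends make difficult.

    import socket
    import time

    CARBON_HOST = "graphite.example.com"   # placeholder -- your Carbon host
    CARBON_PORT = 2003                     # Carbon's default plaintext port

    # A data point from an hour ago. Whisper stores it happily because the
    # plaintext protocol is simply "metric value timestamp" per line.
    metric = "grinder.example_test.tps"    # made-up metric name
    value = 42.0
    timestamp = int(time.time()) - 3600

    sock = socket.create_connection((CARBON_HOST, CARBON_PORT))
    sock.sendall(("%s %f %d\n" % (metric, value, timestamp)).encode("ascii"))
    sock.close()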
For more information on Graphite (including documentation, download links, setup instructions, etc.) see the Graphite web site: http://graphite.wikidot.com/start
2. GATHERING GRINDER METRICS
Up until now, there has been no good way to get Grinder data into Graphite. This was the piece that had no pre-existing solution, and required a new tool to be written. What I came up with is Graphite Log Feeder, available under the GPL at https://bitbucket.org/travis_bear/graphitelogfeeder
Graphite Log Feeder (GLF) parses your Grinder data logs and forwards the performance data to a running instance of Graphite. As with the existing Grinder Analyzer tool (http://track.sourceforge.net/), you have the option to specify a list of response-time groupings. GLF runs in CPython, Jython, and PyPy.
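The core idea is simple. The sketch below is not GLF's actual code, just an illustration of the approach: walk a Grinder data_* log and replay each transaction into Carbon with its original timestamp. The file name, Carbon host, and column positions (the stock Grinder data log puts Test in the third column, Start time in the fourth, and Test time in the fifth) are assumptions -- check your own logs before relying on them.

    import socket

    CARBON = ("graphite.example.com", 2003)        # placeholder host/port

    sock = socket.create_connection(CARBON)
    with open("data_myhost-0.log") as log:          # hypothetical agent data file
        log.readline()                              # skip the header row
        for line in log:
            fields = [f.strip() for f in line.split(",")]
            test_num = fields[2]                    # "Test" column
            start_secs = int(fields[3]) // 1000     # "Start time (ms since Epoch)"
            test_time = int(fields[4])              # "Test time" in ms
            metric = "grinder.test_%s.response_time" % test_num
            sock.sendall(("%s %d %d\n" % (metric, test_time, start_secs)).encode("ascii"))
    sock.close()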
Once your Grinder data is imported into Graphite, you can use it to construct arbitrary graphs.
2.1 Example Graphs
Here are examples of graphs I threw together in a few minutes. You are certainly not limited to what you see here; the number of possible ways to combine your data is vast, so with time and experimentation you can come up with whatever presentation you need. In this test, load increased steadily over half an hour.
2.2 GLF limitations
- With GLF you have no direct visibility into the test summary data generated at the end of the Grinder agent's out_* file. For this, Grinder Analyzer is still your best bet.
- There is no easy way (that I have yet discovered) to zoom the Graphite UI in on the specific block of time when your test ran.
- Although your OS and application-level metrics are available to Graphite in real time (see below), GLF can only make your Grinder data available after your test run has completed.
2.3 Why not use Logster?
Before writing GLF, I assumed that I would use Logster (https://github.com/etsy/logster) to transfer my Grinder data into Graphite. Unfortunately, when I started digging into Logster, I discovered that it (like Cacti and Ganglia; see section 1.2, above) assumes all the data it processes is real-time. There is no support for ingesting old or timestamped data, which made it unsuitable for processing Grinder logs.
3. GATHERING OS-LEVEL METRICS
There are a variety of tools available for getting OS-level performance metrics (memory, disk use, CPU use) into Graphite.
3.1 quickstatd
Quickstatd is a real-time, bash-script-based approach with no external dependencies. It's a good match in cases where you just want to get something simple up and running quickly (thus the name, quickstatd). For additional detail and background, see my posting on quickstatd. For downloads and other info, see https://bitbucket.org/travis_bear/quickstatd
3.2 collectd
Collectd (with the graphite plugin) is a good choice for a production environment. It’s well-tested, with a robust feature set.
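As a rough sketch (assuming collectd 5.x, where Graphite output is provided by the write_graphite plugin), the collectd.conf additions amount to little more than the following. The host name is a placeholder, and the full option list is in the collectd documentation.

    LoadPlugin cpu
    LoadPlugin memory
    LoadPlugin write_graphite

    <Plugin write_graphite>
      <Node "graphite">
        Host "graphite.example.com"
        Port "2003"
        Protocol "tcp"
        Prefix "collectd."
        StoreRates true
      </Node>
    </Plugin>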
3.3 Example graphs
Here are graphs made from quickstatd metrics. The Grinder test is the same one shown in section 2.1 (above).
4. GATHERING APPLICATION-LEVEL METRICS
4.1 JAVA / JMXTRANS
Where I work, most of our servers run Java applications in Tomcat. Tomcat exposes a ton of information about its run state via JMX, and we expose quite a bit of information in our own application code that we'd like to track as well.
We use jmxtrans (http://code.google.com/p/jmxtrans/) to capture these JVM-internal metrics, and forward them to Graphite. With this approach, we can look inside our running apps to see what's happening internally any time we want.
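For illustration, a jmxtrans query definition looks roughly like the JSON below. The exact schema varies between jmxtrans versions, and the hosts, ports, and MBean chosen here are placeholders; consult the jmxtrans wiki for the authoritative format.

    {
      "servers": [{
        "host": "app01.example.com",
        "port": "9004",
        "queries": [{
          "obj": "java.lang:type=Memory",
          "attr": ["HeapMemoryUsage", "NonHeapMemoryUsage"],
          "outputWriters": [{
            "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
            "settings": { "host": "graphite.example.com", "port": 2003 }
          }]
        }]
      }]
    }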
Here are some graphs of JMX statistics captured by jmxtrans. The examples here are of Tomcat metrics (memory use and thread counts) but they could just as easily be for anything your app exposes via JMX.
5. INTEGRATING THE DATA
With all the separate pieces described above up and running, we can go into the Graphite UI to mix-and-match our metrics, creating graphs of data from different sources. Here's a chart containing data from both The Grinder and vmstat.
It took about ten seconds to set that graph up. This kind of simplicity allows us to interactively correlate all kinds of different data. We are currently only scratching the surface of what's possible, and are very excited to see where this takes us.
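For reference, the same mixed graph can also be captured as a single render-API URL, which is handy for embedding in a wiki page or report. The metric paths below are hypothetical; use whatever your GLF and quickstatd/collectd metrics are actually named.

    http://graphite.example.com/render?target=grinder.checkout.tps&target=servers.app01.cpu.user&from=-45minutes&title=TPS+vs+CPU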
6. BEST PRACTICES
There are a few things you can do with this collection of tools that will make your life a little easier.
6.1 Repeatability
In general, repeatability is good, whereas running a wide variety of test scripts on an ever-changing mix of hardware is asking for a headache. Every metric you generate, on every machine, creates a new tree-view item in the Graphite UI. This can clutter up the UI and make it harder to find the data you want. When possible, avoid cycling a bunch of different hardware in and out of your environment, and avoid changing the names of the transactions in your Grinder scripts.
Also, each new metric results in a new Whisper database file being created on your Carbon (Graphite) server. Depending on your data-retention settings, this can wind up consuming a significant amount of space. In my environment, every metric results in a Whisper file of 73 MB, with over 90 GB of disk space dedicated to my relatively small environment of 14 machines.
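(At 73 MB per metric, 90 GB works out to roughly 1,200 Whisper files, or close to 90 distinct metrics per machine across those 14 hosts.)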
6.2 Time synchronization
Time synchronization among all the machines in your environment (preferably with NTP) is a must! Otherwise the data from the different machines in your charts will get out of alignment, and you won't be able to accurately visualize what's really going on.
7. POSSIBLE FUTURE EFFORTS
GLF gives Grinder users abilities they have never had before. And the setup I have described here is quite useful, today. But there are other desirable features that will require additional work to achieve. Depending on time and motivation, I may take a stab at implementing some of these things in the future.
7.1 Grinder run manager
Graphite runs as a Django app. Another app could run in the same Django instance to help with a variety of test-management tasks:
- let you zero in immediately on the time range for a given test
- save metadata (goals, hosts, test type, notes, etc.) on individual Grinder runs
- include some way to store and display the summary data at the end of the Grinder out_* file, similar to the way this information is displayed in Grinder Analyzer, with sortable columns, etc.