Monday, November 9, 2009

Headless Performance Testing


What is headless testing? It is any testing that does not involve a GUI. We are all used to creating a performance script by recording a business process, which requires a GUI for that business process to record against. But do you always have a GUI to record? No, yet many performance testers simply refuse to test anything that doesn't have one. They say that without a GUI an app is not ready to be tested. I know most developers would disagree with that statement, since they have to unit test their code many times without having a GUI.
So what happens? Performance testers say they won't test without a GUI, and developers generally don't have a testable GUI until the end of their release, so performance testing gets done at the end. Hang on! I know that one of the biggest complaints from performance testing teams is that they are always at the end of the cycle and never have enough time to test. Even if they do get the testing completed, if they find problems, the app gets released anyway because there was no time to fix the issues.
One of the biggest problems performance testers have, if not the biggest, is that they are relegated to the end of a release; yet they perpetuate the problem by refusing to test earlier drops that do not have a GUI. This seems quite strange to me.
So what can performance testers do? Start doing headless performance testing. Start testing components before a GUI exists. The earlier you test, the sooner problems are found and the more likely it is that they will be fixed, and that means higher-quality releases.

How do you do it? For SOA there are products that will pull methods from WSDLs and help you manually create scripts for the service. If it is not a service, or a WSDL is not available, you can work with the developers to create a test harness that can be used to create performance scripts. Many times the developers do not even have to write a new test harness, because they may have already written one to unit test their code or component.
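
To make that concrete, here is a minimal sketch of what a developer-provided harness might look like, assuming a hypothetical HTTP-accessible service; the endpoint URL, request, and iteration count are placeholders for illustration, not part of any particular product.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal harness: call a service endpoint repeatedly and record response times.
    // The endpoint URL and iteration count are illustrative placeholders.
    public class ServiceHarness {
        public static void main(String[] args) throws Exception {
            URL endpoint = new URL("http://test-host:8080/orderService/getOrder?id=42"); // hypothetical
            int iterations = 100;
            long[] elapsedMs = new long[iterations];

            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
                conn.setRequestMethod("GET");
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) { /* drain the response */ }
                }
                elapsedMs[i] = (System.nanoTime() - start) / 1_000_000;
            }

            long total = 0;
            for (long ms : elapsedMs) total += ms;
            System.out.printf("Calls: %d, average response time: %d ms%n", iterations, total / iterations);
        }
    }

A harness like this is also exactly the kind of thing a load testing tool can drive with many concurrent users once the single-user version works.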
Is a test harness enough? In some cases, yes. But in most cases you will also need to employ stubs or virtual/simulated services. Stubs are pieces of code that stand in for entities which do not exist yet. When you are trying to test a service, you might not have a front end (GUI), and you probably do not have a back end for the service either. The service may talk to other services, servers, or databases. If those backend entities do not exist, then you have to put something in place to simulate them so that the service you are attempting to test will function and react properly.
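
For illustration only, here is a minimal sketch of such a stub, assuming the service under test calls out to a hypothetical inventory backend over HTTP; the port, path, and canned JSON payload are all made up.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;

    // Stand-in for a backend the service under test depends on (e.g. an inventory
    // system that does not exist yet). It always returns a canned response so the
    // service can be exercised in isolation. Port and payload are illustrative.
    public class InventoryStub {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);
            server.createContext("/inventory/check", (HttpExchange exchange) -> {
                byte[] body = "{\"sku\":\"ABC-123\",\"inStock\":true}".getBytes("UTF-8");
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            server.start();
            System.out.println("Inventory stub listening on port 9090");
        }
    }

Point the service's backend configuration at the stub, and the service can be load tested even though the real backend doesn't exist yet.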
I've mentioned services quite a bit. These seemingly innocent components are increasing the need for headless testing, but they are also making it more feasible. Before, with siloed applications, testing only at the end was a problem; but hey, that's what happened. Now with SOA, services are being deployed and many, many applications are utilizing them. It is no longer OK to test just one app to ensure that it will work as designed. Services need to be performance tested by themselves under their full anticipated load. A service may seem to work with one application or another, but all the applications using that single service combined may overly stress it and cause it to fail.
The good news is that since these services should be well defined and encapsulated, it becomes possible to properly test them without a GUI--again, either by utilizing a WSDL or a test harness from the developers. Headless testing will not only help ensure proper performance of an app in production, it also enables testing earlier in the lifecycle. Before well-defined components, testing earlier was doable, but so hard to do that most just refused. SOA allows early testing to become a reality.


Performance testers need to understand this new reality. It is time that we embrace the services and not just rely on developers to test them. We have been complaining for years that we need to start testing earlier; now that it has become a possibility, we need to jump at the opportunity.

What will happen if performance testers don't perform headless testing? Well, performance testers will start to take on a smaller and smaller role. Developers will have to fill the void and "performance test" the services. We know that QA exists because developers can't test their own code, but someone will need to test the services; and if the performance testers will not, the developers will have to. The developers will then claim that, since they have tested the parts, testing the whole is not that important. I have already witnessed this happening.
Are Business Process testing and End-to-End testing going away? Of course not. They are important, and they always will be. Being able to test earlier will just allow many more performance issues to be found and corrected before applications get released. Testing individual services is needed because, many times, services are released before applications begin using them. I don't think anyone wants any component released into production without it being properly tested.


What have I been trying to say? Performance testers need to step up and start testing individual components. It may be difficult at first because it is a new skill that needs to be learned; however, once a tester gains that expertise, they will make themselves more relevant in the entire application lifecycle, gain more credibility with developers, and assist in releasing higher quality applications. Leaving component testing to developers will eventually lead to poorer quality applications being delivered or a longer delay in releasing a quality application. I can easily say this because QA was created to prevent the low quality apps that were being delivered. If developers were great testers then QA would not exist. That being said, QA can't abdicate their responsibility and rely on developers to test.

Headless testing: learn it, love it, performance test it.

Wednesday, August 19, 2009

Performance Testing Needs a Seat at the Table

It is time Performance Testing gets a seat at the table. Architects and developers like to make all the decisions about products without ever consulting the testing organization. Why should they? All testers have to do is test what's created. If testers can't handle that simple task, then maybe they can't handle their job.

I know this thought process well. I used to be one of those developers :). But I have seen the light. I have been reborn. I know that it is important to get testers' input on products up front. And it is actually becoming more important now than ever before.

With RIA (Web 2.0) technologies there are many different choices that developers can make. Should they use Flex, Silverlight, AJAX, etc.? If they use AJAX, which frameworks should they use? If Silverlight, what type of back-end communication are they going to use?

Let's just take AJAX as an example. There are hundreds of frameworks out there. Some are popular, common frameworks, but most are obscure or one-off frameworks. Developers like to make decisions based on what will make their life easier and what is cool. But what happens if their choices can't be performance tested? Maybe the performance team doesn't have the expertise in-house, or maybe their testing tools don't support the chosen framework. What happens then?

I can tell you that many times the apps get released without being tested properly and they just hope for the best. It's a great solution. I like the fingers crossed method of releasing apps.

How could this situation be avoided? Simply include the performance testing group up front. Testing is a major cog in the application lifecycle and should be included at the beginning. I'm not talking about testing earlier in the cycle (although that is important and should be done). I'm talking about getting testing involved in architecture and development discussions before development takes place.
If developers and architects knew up front that certain coding decisions would make it hard or impossible to performance test, then maybe they would choose other options for the application development. Many businesses would not want to risk releasing an application if they knew that it could not be tested properly. But when they find out too late, they don't have a choice except to release it (the fingers-crossed method).

If the performance team knew up front that they couldn't test something because of skills or tools, then at least they would have a heads-up and could begin planning early for the inevitable testing. Wouldn't it be nice to know what you are going to need to performance test months in advance? No more scrambling at the 11th or 12th hour.

Think about this. If testing were involved or informed up front, then maybe they, along with development, could help push standards across development. For example, standardizing on one or two AJAX frameworks would help out both testing and development. It would make the code more maintainable, because more developers would be able to update it, and it would help ensure that the application is testable.

We need to get more groups involved up front. The more you know, the better the decisions, the better off the application is going to be.

Thursday, August 13, 2009

What are the Goals of Performance Testing?

So what is the point of performance testing? I get this question often. And depending on who you talk to, you get different answers.

First let me begin by telling you what are NOT the goals of performance testing / validation.
  • Writing a great script
  • Creating a fantastic scenario
  • Knowing which protocols to use
  • Correlating script data
  • Data Management
  • Running a load test

This is not to say that all of these are not important. They are very important, but they are not the goals. They are the means to the end.

So why DO people performance test? What are the goals?

  • Validating that the application performs properly
  • Validating that the application conforms to the performance needs of the business
  • Finding, analysing, and helping fix performance problems
  • Validating that the hardware for the application is adequate
  • Doing capacity planning for future demand of the application

The outcomes of the performance test are the goals of testing. It seems basic. Of course these are the goals. But...

  • How many people really analyse the data from a performance test?
  • How many people use diagnostic tools to help pinpoint the problems?
  • How many people really know that the application performs to the business requirements?
  • How many people just test to make sure that the application doesn't crash under load?

Even though they seem obvious, many testers/engineers are not focusing on them correctly, or are not focused on them at all.

  • Analysing the data is too hard.
  • If the application stays up, isn't that good enough?
  • So what if it's a little slow?

These are the reasons that I hear. Yes, you want to make sure that the application doesn't crash and burn. But who wants to go to a slow website? Time is money. That is not just a cliché; it's the truth. Customers will not put up with a slow app or website. They will go elsewhere, and they do go elsewhere. Even if it is an internal application, if it is slow performing a task, then it takes longer to get the job done, and that means it costs more to get that job done.
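
As a small illustration of what "performs to the needs of the business" can mean in practice, here is a minimal sketch that checks measured transaction times against a requirement; the sample data and the 3-second, 90th-percentile target are hypothetical.

    import java.util.Arrays;

    // Check measured transaction response times against a business requirement.
    // The sample data and the 3-second / 90th-percentile target are hypothetical.
    public class SlaCheck {
        public static void main(String[] args) {
            double[] responseSeconds = {1.2, 0.9, 2.8, 3.4, 1.1, 0.7, 2.2, 4.1, 1.8, 1.5};
            double requirementSeconds = 3.0;

            double[] sorted = responseSeconds.clone();
            Arrays.sort(sorted);
            // Index of the 90th percentile (nearest-rank method).
            int index = (int) Math.ceil(0.90 * sorted.length) - 1;
            double p90 = sorted[index];

            System.out.printf("90th percentile: %.1f s (requirement: %.1f s) -> %s%n",
                    p90, requirementSeconds, p90 <= requirementSeconds ? "PASS" : "FAIL");
        }
    }

The point is not the arithmetic; it is that a pass/fail answer against a stated business requirement is a very different outcome than "the app stayed up."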

Performance engineering is needed to ensure that applications perform properly and perform to the needs of the business. These engineers do not just write performance scripts. Just because someone knows Java does not mean that they are a developer. And just because a person knows how to write a performance script does not mean that they are a performance engineer.

Performance engineering requires skills that not all testers have. They need to understand the application under test (AUT), databases, web servers, load balancers, SSO, and so on. They also have to understand the impact of CPU, memory, caching, I/O, bandwidth, etc. These are not skills that are learned overnight, but skills that are acquired over time.

I wrote a previous blog entry on "you get what you pay for". If you pay for a scripter, you get a scripter. If you pay for a performance engineer, you get a performance engineer (well not always :). Sometimes people exaggerate their skills :) ).

Companies can always divide and conquer. They can have automators/scripters create the scripts and the tests, then have performance engineers look at the tests and analyse the results. In any case, the performance engineer is a needed position if you want to properly performance test/validate.

It needs to be mandatory to know what metrics to monitor and what those metrics mean. Knowing how to use diagnostic tools also needs to be mandatory. Again, in a previous blog I mentioned that if you are not using diagnostics, you are doing an injustice to your performance testing. Without this analysis knowledge you are not truly performance testing; you are just running a script with load. Performance testing is both running scripts and analysing the runs.

By looking at the monitoring metrics and diagnostic data, one can begin to correlate data and help pinpoint problems. One can also notice trends that may become problems over time. Just running a load test without analysis will not give you that insight. It will just let you know that the test appeared to run OK for that test run. Many times just running the test will give you a false positive. People wonder why an application in production is running slow if it already passed performance validation. Sometimes this is the reason (you never want this to be the reason). Proper analysis will ensure a higher-quality application.
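
To give a sense of what "correlating data" can look like, here is a minimal sketch that computes the correlation between per-interval response times and a monitored metric such as CPU; the sample values are made up for illustration.

    // Correlate per-interval response times with a monitored metric (CPU here) to see
    // whether they move together. The sample values are made up for illustration.
    public class MetricCorrelation {
        static double pearson(double[] x, double[] y) {
            int n = x.length;
            double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
            for (int i = 0; i < n; i++) {
                sumX += x[i];
                sumY += y[i];
                sumXY += x[i] * y[i];
                sumX2 += x[i] * x[i];
                sumY2 += y[i] * y[i];
            }
            double numerator = n * sumXY - sumX * sumY;
            double denominator = Math.sqrt((n * sumX2 - sumX * sumX) * (n * sumY2 - sumY * sumY));
            return numerator / denominator;
        }

        public static void main(String[] args) {
            double[] avgResponseSeconds = {1.1, 1.3, 1.2, 2.0, 2.6, 3.1, 3.0, 3.5};
            double[] avgCpuPercent      = {35,  40,  38,  62,  75,  83,  81,  90};
            System.out.printf("Response time vs. CPU correlation: %.2f%n",
                    pearson(avgResponseSeconds, avgCpuPercent));
        }
    }

A strong correlation like this one points you toward CPU as the resource to investigate; a weak one tells you to keep looking.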

As I said, these are not skills that are created overnight. Performance engineers learn on the job. How do you make sure that this knowledge stays with a company as employees come and go? That is where a Center of Excellence (CoE) comes into play (you knew I was going to have to pitch this :) ). If you centralize your testing efforts, then the knowledge becomes centralized, as opposed to being dispersed through a company only to get lost when the employees with the knowledge leave. You can read yet another one of my blog entries for more information on the CoE. Wow! I've just been pitching my blog entries today :). But I digress.

Let's stop thinking that proper performance testing is just writing a good script and agree that performance engineering is not an option but a must. Let's start to focus on the real goals of performance testing, and then all of the important "means to the end" will just fall into place.

Wednesday, July 8, 2009

HP Software Universe (HPSU): Performance Highlights

I want to thank all who came this year and made HPSU successful. Also I hope those that were not able to come this year will be able to attend next year in Washington D.C. Yes! Next year HPSU will not be in Vegas.

My week started out on Monday with our Customer Advisory Board (CAB). This all-day session with 9 of our best performance customers was great. We received a lot of feedback on our current releases and some good ideas for the future.

There were 3 themes that I noticed throughout most of my meetings, whether in the CAB, one-on-ones, or round tables. Customers are looking for:
  1. Better ways of testing Rich Internet Applications (RIA)
  2. Getting developers or scrums involved in the testing process
  3. Ways of thanking us for the WAN emulation. (Really customers are loving this)

When it comes to new web sites, RIA is the winner. For some reason companies want better end-user experiences. With these new technologies come performance testing challenges. We understand. We have been coming out with protocols and features to help tackle these challenges, such as AJAX Click & Script and our Flex protocol. But we also understand that this is not enough. You guys are looking for easier and more scalable solutions. I can't say what we are doing, but I can tell you that we are on it and we are making fantastic progress. I've seen some of the early work, and you guys will LOVE what we are coming out with (some time in the future :) ).

So now developers are trying to get into the testing game. I have heard that some customers are saying to just let the developers performance test the app, and that there is no need for a performance testing team. I don't get this. Would anyone ever want to get rid of functional QA? Do people think that functional QA does not provide value? Of course not. So why is there this push for developers to do all performance testing? I understand that they may want to do performance sanity checks. That makes sense, but giving all the performance testing to developers makes as much sense as having them do all the functional testing. You can't have the fox guard the hen house. That just will not work over time. OK, I'm done with my rant.

Let's get back to development performance testing. There is a need in Agile, in SOA, and for sanity purposes to start performance testing earlier and more often. Hearing this over and over again at HPSU was very interesting. Over the past year or so we have noticed that Performance Center is being utilized more and more by development communities. Unlike with other testing tools, including LoadRunner, developers do not have to have their own load testing machines. They do not need to have controllers or load generators. All they have to do is create scripts, upload them to PC, and then run the tests on the performance infrastructure that already exists. All they need is intranet access and a login. To be honest, this wasn't one of the use cases for creating Performance Center, but it is a benefit that has been popping up recently with many of our customers. And it does correlate with what we heard at HPSU this year. Developers have different choices for performance testing, but when they use PC those scripts can then be reused by the performance team. This reduces the time it takes to develop scripts by allowing reuse, and it brings the performance testers closer to the developers. And really, we are all about bringing people closer together.

Finally, we heard many people talking about WAN emulation. In case you don't know, LoadRunner and Performance Center have an integration with WAN emulation software created by Shunra. We released this integration in our latest version, 9.5, and the feedback has been wonderful. Customers are telling us that they can now do types of testing that they have never been able to do before, or can do it much more easily. With this integration, all the setup of the WAN emulation can be done through LR or PC. You do not have to work with the Shunra application directly. Now you can emulate usage from all over the world with the load generators that you have in your test lab. Shunra's WAN emulator is sold separately and only through Shunra. But if you need to understand system performance from different points on the globe and you don't have LGs there, then this integration is something that you need to look into. And from what I heard at HPSU, there are many of you who need to look into it.

It was a good week. Actually, better than expected. I thought that because of the economy we wouldn't have as lively a show. But I was wrong (this is a rarity). I thrive on getting feedback and information from customers, and at events like this I get so much of it. But of course it is never enough. I would like you to also provide me feedback on my products. If you would like to, you can always email me at sfeloney@hp.com. Also, if you would like to set up a phone or face-to-face meeting, just let me know.

Tuesday, May 12, 2009

ROI: You Get What You Pay For

We've all heard that saying. But how many times do we really follow it? We have bought (OK, I have bought) cheap drills, exercise machines, and furniture, only to be sorry when they broke prematurely. Or you find a great deal on shoes only to have them fall apart on you while you are in a meeting with a customer. I'm not saying that happened to me, but I know how that feels.

Cheaper always seems like it's a better deal. Of course it's not always true. I can tell you that now I pay more for my shoes and I'm much happier for it :). No more embarrassing shoe problems in front of customers (not saying that it happened to me). In fact when my latest pair of shoes had an issue, I sent them back to the dealer and they mailed me a new pair in less than a week! That's service. You get what you pay for.

The same holds true for cars, clothes, hardware, repairs, and of course software testing tools. You knew I was going to have to go there.

I hear from some people that Performance Center is too expensive. I'm always amazed when I hear that. I'm not saying Performance Center is for everyone. If you don't need PC's features and functionality, it may appear pricey. If you are only looking for a simple cell phone, then a phone that connects to the internet and to your email and also has a touch screen may seem a little pricey. But if you need those features then you are able to see the value in those features.

I could sit here, go through each unique feature in Performance Center, and explain the value to you (not saying that it won't come in a future blog :) ). But why would you listen to me? I'm the product manager. Of course I'm going to tell you that there is a lot of value in PC. Well, IDC, a premier global provider of market intelligence and advisory services, has just released an ROI case study around HP Performance Center.

A global finance company specializing in real estate finance, automotive finance, commercial finance, insurance, and online banking was able to achieve total ROI in 5.6 months. Yes! Only 5.6 months, not 1 or 2 years, but a total return on investment in 5.6 months. If anything, I think we should be charging more for Performance Center :). This company did not just begin using PC; they have been using it for the last 4 years, and during that time they have found a cumulative benefit of $24M. I'd say they got a lot more than what they were paying for. Not only did they see a 5.6-month return on the investment, but they are also seeing a 44% reduction in errors and a 33% reduction in downtime in production.

What gave them these fantastic numbers?

Increased Flexibility
By moving to Performance Center they were able to schedule their tests. Before PC, they had controllers sitting around idle while other controllers were in high demand based on the scenarios that were on them. But once they were able to start scheduling their tests, they began performing impromptu load tests and concurrent load tests. They found that they were able to run more tests with fewer controllers.

Increased Efficiency
While they were able to increase their testing output, they didn't increase their testing resources. Their testers/engineers were able to do more through PC than they could with any other tool.

Central Team
With the central team they were able to increase their knowledge and best practices around performance testing. By doing this, along with performing more tests, they were able to reduce their error rate by 44% and their production downtime by 33%.

So you get what you pay for. Put in the time and money. Get a good enterprise performance testing product. Invest in a strong central testing team. You will get more out than what you put in. In the case of this customer, they got $21M out over 4 years.

Also invest in good shoes. Cheap shoes will only cause you headaches and embarrassment (Not saying that it happened to me).

Wednesday, April 29, 2009

More About Performance Centers of Excellence

Earlier I wrote about the benefits of creating a performance CoE. I explained how, by making a move to a CoE, you can increase your quality, increase your testing throughput, and decrease your overall spend. I know that this sounds too good to be true, but I'm telling you it's not. Don't just take my word on it. Theresa Lanowitz of voke ran her own Market Snapshot on performance CoEs. Listen to her on May 19, 2009. She will go over all of her findings on CoEs. And since I'm telling you about it, the findings are good. :)


After you listen to this presentation, come back here and I'll talk about the steps you can take to begin building a CoE.




voke Shares Research Findings

ROI and performance goals with a performance testing shared service

Dear HP Valued Customer,

In today’s economic climate organizations are seeking ways to optimize resources and results. Part of the move towards optimization is the evolution and transformation of the application lifecycle, particularly the components that drive efficiency and quality.

To achieve business optimization through the application lifecycle, organizations are building centers of excellence (COE). A COE focused on performance is a strategic way to deliver customer satisfaction, high quality and resource savings.

Join Theresa Lanowitz, founder of voke Inc., as she presents the findings of a recent Market Snapshot on Performance Centers of Excellence. In this presentation, we will discuss:
  • Building a performance COE
  • Achieving performance COE ROI – qualitative and quantitative
  • Gaining organizational maturity through a performance COE
  • Realizing the benefits, results, and strategic value of a performance COE

Attendees will receive a copy of the voke Market Snapshot™ Report: Performance Center of Excellence.



Speakers:

Theresa Lanowitz, CEO & Founder – voke, Inc.

Priya Kothari, Product Manager, Performance Testing Solutions – HP Software + Solutions

PS: HP Software Universe 2009 registration is now open! Interested in learning the very latest about IT strategy, applications and operations? Then seize this opportunity to join us June 16-18, 2009 at The Venetian Resort Hotel Casino in Las Vegas, Nevada.


Webinar details: Tuesday, May 19, 2009, 10:00 am PT / 1:00 pm ET, 60 minutes with Q&A.



Sunday, April 5, 2009

Finding a Needle in a Haystack

The job of finding the proverbial needle in a haystack has always been a challenge. Digging through a haystack and struggling to find a needle is hard, if not near impossible. Plus, you don't even know if there is a needle in there! After searching for a while, I know (or at least I hope) that you'll be asking for some tools to help you out.



The first tool that I can think of is a metal detector. This makes sense. Use the metal detector to discover if there is a needle in the stack at all. If there isn't, then you don't have to waste your time looking for it. Yay!!! At least, now I know that I can quickly check stacks and only search through stacks that actually have a needle.

Is that the best or only tool that you can use? If you could cut the search time down even more, wouldn't that be good? Sure, unless you really like digging through that hay. What if you had a strong magnet? That would rock!!! First, use the metal detector to make sure that there is a needle; then bring in the strong magnet and voila! Out comes the needle, and you're done! No more searching required, and your job just got a whole lot easier.

As you may have already guessed, I'm not here to discuss haystacks. They are fun and all but not really the point. Let's tie this back to performance testing. That's more fun than haystacks anyway.

In the beginning, people tried to performance test manually, with lots of people attacking an application at the same time. This was found to be time consuming, not repeatable, and not that beneficial, like digging through that haystack by hand.

Then came automated load testing with monitoring. Now there's a repeatable process and monitors to help discover if there are problems. This is a tremendous time saver and helps to ensure the quality of apps going to production--the metal detector, if you will.

Most of you know and love the metal detector, as well you should. But it is about time you were introduced to the strong magnet of performance testing. Say hello to Diagnostics (Diag). Some call it deep-dive; others call it code-level profiling. I call it the time saver. Diagnostics will pinpoint (or should I say needle-point :-) ) problems down to the method or SQL statement level. That is huge! Now you have much more information to take back to developers. It's no longer just transaction-level information; you can now show them the method or SQL statement that is having the problem within the transaction. This slashes the time it takes to fix problems. Diagnostics has agents on the application under test (AUT) machines. It then hooks into the J2EE and/or .NET code. This is how it can show where the bottlenecks are. Highlighting slow methods and SQL statements is good, but being able to tie them back to a transaction is even better.
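
To illustrate the idea (and only the idea; this is not the Diagnostics product), here is a toy sketch of method-level timing: wrap a call, measure it, and flag it when it exceeds a threshold. The method name and threshold are hypothetical, and real deep-dive tools do this by instrumenting the running code rather than by hand-wrapping calls.

    import java.util.function.Supplier;

    // Toy illustration of method-level timing: not the Diagnostics product itself,
    // just the kind of measurement it automates by instrumenting the running code.
    public class MethodTimer {
        static <T> T timed(String name, long thresholdMs, Supplier<T> call) {
            long start = System.nanoTime();
            try {
                return call.get();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                if (elapsedMs > thresholdMs) {
                    System.out.printf("SLOW: %s took %d ms%n", name, elapsedMs);
                }
            }
        }

        public static void main(String[] args) {
            String result = timed("OrderDao.findOrder (hypothetical)", 100, () -> {
                try { Thread.sleep(250); } catch (InterruptedException ignored) { }
                return "order-42";
            });
            System.out.println("Result: " + result);
        }
    }

The value of a real diagnostics tool is that it does this for every method and SQL statement automatically and ties the slow ones back to the transaction that triggered them.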

From personal experience, I can tell you that Diagnostics works extremely well. Back at good old Mercury Interactive, we had a product that was having some performance issues. Our R&D had been spending a few weeks looking for the problem(s). Finally, I approached them and asked if they had tried our Diagnostics on it. They of course said no; otherwise this would be a counterproductive story. After I explained to them in detail the importance and beauty of Diagnostics, they set it up. Within the first hour of using Diagnostics, they found multiple problems. The next day, all of them were fixed. R&D went from spending weeks looking for the needles that they couldn't find to finding them within an hour with the magnet I gave them. And now they always use Diagnostics as part of their testing process.

I've heard people complain that it takes too much time to learn and set up. First off, it isn't that complicated to set up. Secondly, yes it is something new to learn but it's not too difficult. Once you learn it, however, it is knowledge that you can use over and over again. It took time to learn how to performance test to begin with. Record, parameterize, correlate, and analyze all took time to learn. This is just one more tool (or magnet) in your tool belt. It will save you and the developers tremendous amounts of time.

Isn't Diagnostics something that IT uses in production? Yes, it is! Both sides (testing and production) can gain valuable information from using Diag, and both for the same reason: find the problem and fix it fast. Testers can always run the test again to see if they can reproduce a problem. But in production, once an issue occurs, they want to be able to grab as much information as possible so that they can prevent it from happening again. After production gets this information, they can pass it back to testing to have testing reproduce the issue. If the performance group is using the same Diag as production, then it is easier to compare results. A dream of production and testing working together in harmony, but I digress.

I have said for years that if performance testers are not utilizing diagnostics, then they are doing a disservice to themselves and to the task. Stop digging through the haystack. Pick up that magnet and start pulling out those needles!
