Monday, November 9, 2009

Headless Performance Testing


What is headless testing? It is any testing done without a GUI. We are all used to creating a performance script by recording a business process, which requires a GUI to record against. But do you always have a GUI to record? No, yet many performance testers simply refuse to test anything that doesn't have one. They say that without a GUI an app is not ready to be tested. I know most developers would disagree with that statement, since they often have to unit test their code without a GUI.
So what happens? Performance testers say they won't test without a GUI, and developers generally don't have a testable GUI until the end of a release; that means performance testing gets pushed to the end. Hang on! One of the biggest complaints from performance testing teams is that they are always at the end of the cycle and never have enough time to test. Even when they do complete the testing and find problems, the app gets released anyway because there was no time to fix the issues.
One of the biggest problems performance testers have, if not the biggest, is that they are relegated to the end of a release; yet they perpetuate the problem by refusing to test earlier drops that do not have a GUI. This seems quite strange to me.
So what can performance testers do? Start doing headless performance testing. Start testing components before a GUI exists. The earlier you test, the quicker problems are found, and the more likely they are to be fixed; that means higher quality releases.

How to do it? For SOA, there are products that will pull methods from WSDLs and help you create scripts for the service. If it is not a service, or a WSDL is not available, you can work with the developers to create a test harness that can be used to build performance scripts. Many times the developers do not even have to write a new test harness, because they already have one from unit testing their code or component.
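To make this concrete, here is a minimal sketch of such a harness in Python. The endpoint, operation name, and SOAP payload are all hypothetical; the point is simply that a few lines of code can exercise and time one service operation with no GUI in sight.

```python
import time
import urllib.request

ENDPOINT = "http://test-host:8080/OrderService"  # hypothetical service URL

# A canned SOAP envelope for the operation under test (purely illustrative).
SOAP_BODY = b"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><getOrderStatus><orderId>42</orderId></getOrderStatus></soap:Body>
</soap:Envelope>"""

def call_service():
    # POST the envelope to the service and time the round trip.
    req = urllib.request.Request(
        ENDPOINT, data=SOAP_BODY,
        headers={"Content-Type": "text/xml; charset=utf-8"})
    start = time.perf_counter()
    with urllib.request.urlopen(req, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

print(f"getOrderStatus took {call_service() * 1000:.1f} ms")
```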
Is a test harness enough? In some cases, yes. But in most cases you will also need to employ stubs or virtual/simulated services. Stubs are pieces of code that stand in for entities that do not exist yet. When you are trying to test a service, you might not have a front end (GUI), and you probably do not have a backend for the service either. The service may talk to other services, servers, or databases. If those backend entities do not exist, then you have to put something in place to simulate them so that the service you are attempting to test will function and react properly.
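Here is an equally minimal sketch of a stub, assuming (hypothetically) that the service under test calls its missing backend over HTTP. It serves a canned response with a small artificial delay so the service gets a realistic answer; the payload, port, and latency figure are made up for illustration.

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b'{"status": "OK", "inventory": 17}'  # illustrative payload
SIMULATED_LATENCY_SECONDS = 0.05  # pretend the real backend takes ~50 ms

class StubHandler(BaseHTTPRequestHandler):
    # Answer every request with the same canned body after a short delay.
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_SECONDS)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    do_POST = do_GET  # treat POSTs the same way for simplicity

HTTPServer(("0.0.0.0", 9000), StubHandler).serve_forever()
```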
I've mentioned services quite a bit. These seemingly innocent components are increasing the need for headless testing, but they are also making it more feasible. Before, with siloed applications, it was a problem to test only at the end; but hey, that's what happened. Now, with SOA, services are being deployed and many, many applications are utilizing them. It is no longer OK just to test one app to ensure that it will work as designed. Services need to be performance tested by themselves, at their full anticipated load. A service may appear to work fine with one application or another, but all the applications using that single service combined may overly stress it and cause it to fail.
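A rough sketch of what "combined load" means in practice: many concurrent callers, standing in for the aggregate of several applications, hitting one (again hypothetical) service endpoint and reporting a percentile. The URL and volumes are assumptions, not a recipe.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SERVICE_URL = "http://test-host:8080/OrderService/health"  # hypothetical
CONCURRENT_CALLERS = 50   # e.g., three applications' combined concurrency
CALLS_PER_CALLER = 20

def one_caller(_):
    # Each caller makes a series of requests and records how long each took.
    timings = []
    for _call in range(CALLS_PER_CALLER):
        start = time.perf_counter()
        with urllib.request.urlopen(SERVICE_URL, timeout=30) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_CALLERS) as pool:
        per_caller = list(pool.map(one_caller, range(CONCURRENT_CALLERS)))
    all_timings = sorted(t for caller in per_caller for t in caller)
    p95 = all_timings[int(len(all_timings) * 0.95)]
    print(f"{len(all_timings)} calls, 95th percentile {p95 * 1000:.0f} ms")
```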
The good news is that since these services should be well defined and encapsulated, it becomes possible to properly test them without a GUI--again, either by utilizing a WSDL or a test harness from the developers. Headless testing will not only help ensure proper performance of an app in production, it also enables testing earlier in the lifecycle. Before well-defined components, testing earlier was doable, but so hard that most just refused. SOA allows early testing to become a reality.


Performance testers need to understand this new reality. It is time that we embrace the services and not just rely on developers to test them. We have been complaining for years that we need to start testing earlier; now that it is possible, we need to jump at the opportunity.

What will happen if performance testers don't perform headless testing? Performance testers will take on a smaller and smaller role. Developers will have to fill the void and "performance test" the services. We know that QA exists because developers can't test their own code, but someone will need to test the services; and if the performance testers will not, the developers will have to. The developers will then claim that, since they have tested the parts, testing the whole is not that important. I have already witnessed this happening.
Are Business Process testing and End-2-End testing going away? Of course not. They are important, and they always will be. Being able to test earlier will just allow many more performance issues to be found and corrected before applications get released. Testing individual services is needed because, many times, services are released before applications begin using them. I don't think anyone wants any component released into production without it being properly tested.


What have I been trying to say? Performance testers need to step up and start testing individual components. It may be difficult at first because it is a new skill that needs to be learned; however, once a tester gains that expertise, they will make themselves more relevant in the entire application lifecycle, gain more credibility with developers, and assist in releasing higher quality applications. Leaving component testing to developers will eventually lead to poorer quality applications being delivered or a longer delay in releasing a quality application. I can easily say this because QA was created to prevent the low quality apps that were being delivered. If developers were great testers then QA would not exist. That being said, QA can't abdicate their responsibility and rely on developers to test.

Headless testing: learn it, love it, performance test it.

Wednesday, August 19, 2009

Performance Testing Needs a Seat at the Table

It is time Performance Testing gets a seat at the table. Architects and developers like to make all the decisions about products without ever consulting the testing organization. Why should they? All testers have to do is test what's created. If testers can't handle that simple task, then maybe they can't handle their job.

I know this thought process well. I used to be one of those developers :). But I have seen the light. I have been reborn. I know that it is important to get testers' input on products upfront. And it is actually becoming more important now than ever before.

With RIA (Web 2.0) technologies, there are many different choices that developers can make. Should they use Flex, Silverlight, AJAX, etc.? If they use AJAX, which frameworks should they use? If Silverlight, what type of back-end communication are they going to use?

Let's just take AJAX as an example. There are hundreds of frameworks out there. Some are popular and common, but most are obscure or one-off frameworks. Developers like to make decisions based on what will make their life easier and what is cool. But what happens if their choices can't be performance tested? Maybe the performance team doesn't have the expertise in-house, or maybe their testing tools don't support the chosen framework. What happens then?

I can tell you that many times the apps get released without being tested properly and they just hope for the best. It's a great solution. I like the fingers crossed method of releasing apps.

How could this situation be avoided? Simply include the performance testing group upfront. Testing is a major cog in the application life cycle. They should be included at the beginning. I'm not talking about testing earlier in the cycle (although that is important and it should be done). I'm talking about getting testing involved in architecture and development discussions before development takes place.
If developers and architects knew up front that certain coding decisions would make it hard or impossible to performance test, then maybe they would choose other options for the application development. Many businesses would not want to risk releasing an application if they knew that it could not be tested properly. But when they find out too late, they don't have a choice except to release it (the finger-crossing method).

If the performance team knew upfront that they couldn't test something because of skills or tools, then at least they would have a heads-up and they could begin planning early for the inevitable testing. Wouldn't it be nice to know what you are going to need to performance test months in advance? No more scrambling at the 11th or 12th hour.

Think about this: if testing was involved or informed upfront, then maybe they, along with development, could help push standards across development. For example, standardizing on 1 or 2 AJAX frameworks would help out both testing and development. It would make the code more maintainable, because more developers would be able to update it, and it would help ensure that the application is testable.

We need to get more groups involved up front. The more you know, the better the decisions, the better off the application is going to be.

Thursday, August 13, 2009

What are the Goals of Performance Testing?

So what is the point of performance testing? I get this question often. And depending on who you talk to, you get different answers.

First let me begin by telling you what are NOT the goals of performance testing / validation.
  • Writing a great script
  • Creating a fantastic scenario
  • Knowing which protocols to use
  • Correlating script data
  • Data Management
  • Running a load test

This is not to say that all of these are not important. They are very important, but they are not the goals. They are the means to the end.

So why DO people performance test? What are the goals?

  • Validating that the application performs properly
  • Validating that the application conforms to the performance needs of the business
  • Finding, analysing, and helping fix performance problems
  • Validating that the hardware for the application is adequate
  • Doing capacity planning for future demand of the application

The outcomes of the performance test are the goals of testing. It seems basic. Of course these are the goals. But...

  • How many people really analyse the data from a performance test?
  • How many people use diagnostic tools to help pinpoint the problems?
  • How many people really know that the application performs to the business requirements?
  • How many people just test to make sure that the application doesn't crash under load?

Even though they seem obvious, many testers/engineers are not focusing on them correctly, or are not focused on them at all.

  • Analysing the data is too hard.
  • If the application stays up, isn't that good enough?
  • So what if it's a little slow?

These are the reasons that I hear. Yes, you want to make sure that the application doesn't crash and burn. But who wants to go to a slow website? Time is money. That is not just a cliché; it's the truth. Customers will not put up with a slow app/website. They will go elsewhere, and they do go elsewhere. Even if it is an internal application, if it is slow performing a task, then it takes longer to get the job done, and that means it costs more to get that job done.

Performance engineering is needed to ensure that applications perform properly and perform to the needs of the business. These engineers do not just write performance scripts. Just because someone knows Java does not mean that they are a developer. And just because a person knows how to write a performance script does not mean that they are a performance engineer.

Performance engineering requires skills that not all testers have. Engineers need to understand the application under test (AUT), databases, web servers, load balancers, SSO, and so on. They also have to understand the impact of CPU, memory, caching, I/O, bandwidth, and the like. These are not skills that are learned overnight; they are acquired over time.

I wrote a previous blog entry on "you get what you pay for". If you pay for a scripter, you get a scripter. If you pay for a performance engineer, you get a performance engineer (well not always :). Sometimes people exaggerate their skills :) ).

Companies can always divide and conquer. They can have automators/scripters create the scripts and the tests, then have performance engineers look at the tests and analyse the results. In any case, the performance engineer is a needed position if you want to properly performance test and validate.

It needs to be mandatory to know what metrics to monitor and what those metrics mean. Knowing how to use diagnostic tools also needs to be mandatory. Again, in a previous blog I mentioned that if you are not using diagnostics you are doing an injustice to your performance testing. Without this analysis knowledge you are not truly performance testing; you are just running a script with load. Performance testing is both running scripts and analysing the runs.
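As a taste of what basic post-run analysis looks like, here is a small Python sketch. It assumes a hypothetical CSV export of per-transaction response times from a load test; the file name and column names are made up for illustration.

```python
import csv
from collections import defaultdict

def percentile(sorted_values, pct):
    # Simple nearest-rank percentile on an already-sorted list.
    index = min(int(len(sorted_values) * pct), len(sorted_values) - 1)
    return sorted_values[index]

timings = defaultdict(list)
with open("results.csv", newline="") as f:  # hypothetical results export
    for row in csv.DictReader(f):
        timings[row["transaction"]].append(float(row["response_time_seconds"]))

for name, values in sorted(timings.items()):
    values.sort()
    print(f"{name}: avg {sum(values) / len(values):.2f}s, "
          f"90th pct {percentile(values, 0.90):.2f}s, worst {values[-1]:.2f}s")
```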

By looking at the monitoring metrics and diagnostic data, one can begin to correlate data and pinpoint problems. One can also notice trends that may become problems over time. Just running a load test without analysis will not give you that insight; it will just tell you that the test appeared to run OK for that run. Many times just running the test will give you a false positive. People wonder why an application in production is running slow if it already passed performance validation. Sometimes this is the reason (you never want this to be the reason). Proper analysis will ensure a higher quality application.
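To illustrate the kind of correlation I mean, here is a short Python sketch using made-up sample numbers. It computes a Pearson coefficient between a monitored metric (CPU) and response times sampled over the same intervals; a value near +1 says the two climb together, which is a clue worth chasing.

```python
def pearson(xs, ys):
    # Pearson correlation: covariance over the product of deviations.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    ss_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (ss_x * ss_y)

# Illustrative samples, one per 30-second interval of a test run.
cpu_percent  = [35, 42, 58, 71, 86, 93]
resp_seconds = [0.8, 0.9, 1.4, 2.1, 3.8, 6.2]

print(f"CPU vs. response time: r = {pearson(cpu_percent, resp_seconds):.2f}")
```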

As I said, these are not skills that are created overnight. Performance engineers learn on the job. How do you make sure that this knowledge stays with a company as employees come and go? That is where a Center of Excellence (CoE) comes into play (you knew I was going to have to pitch this :) ). If you centralize your testing efforts, then the knowledge becomes centralized, as opposed to dispersed throughout a company only to get lost when the employees who hold it leave. You can read yet another one of my blog entries for more information on the CoE. Wow! I've just been pitching my blog entries today :). But I digress.

Let's stop thinking that proper performance testing is just writing a good script, and agree that performance engineering is not an option but a must. Let's start to focus on the real goals of performance testing, and then all of the important "means to the end" will just fall into place.

Wednesday, July 8, 2009

HP Software Universe (HPSU): Performance Highlights

I want to thank all who came this year and made HPSU successful. Also I hope those that were not able to come this year will be able to attend next year in Washington D.C. Yes! Next year HPSU will not be in Vegas.

My week started out on Monday with our Customer Advisory Board (CAB). This all-day session with 9 of our best performance customers was great. We received a lot of feedback on our current releases and some good ideas for the future.

There were 3 themes that I noticed throughout most of my meetings. Whether in the CAB, one-on-ones, or round tables, customers are looking for:
  1. Better ways of testing Rich Internet Applications (RIA)
  2. Getting developers or scrums involved in the testing process
  3. Ways of thanking us for the WAN emulation (really, customers are loving this)

When it comes to the new web sites, RIA is the winner. For some reason companies want better end-user experiences. So with these new technologies come performance testing challenges. We understand. We have been coming out with protocols and features to help tackle these challenges, such as AJAX Click & Script and our Flex protocol. But we also understand that this is not enough. You guys are looking for easier and more scalable solutions. I can't say what we are doing, but I can tell you that we are on it and we are making fantastic progress. I've seen some of the early work and you guys will LOVE what we are coming out with (some time in the future :) ).

So now developers are trying to get into the testing game. I have heard some customers saying to just let the developers performance test the app; there is no need for a performance testing team. I don't get this. Would anyone ever want to get rid of functional QA? Do people think that functional QA does not provide value? Of course not. So why is there this push for developers to do all performance testing? I understand that they may want to do performance sanity checks. That makes sense, but giving all the performance testing to developers makes as much sense as having them do all the functional testing. You can't have the fox guard the hen house. That just will not work over time. Ok, I'm done with my rant.

Let's get back to development performance testing. There is a need in Agile, in SOA, and for sanity purposes to start performance testing earlier and more often. Hearing this over and over again at HPSU was very interesting. Over the past year or so, we have noticed Performance Center being utilized more and more by development communities. Unlike with other testing tools, including LoadRunner, the developers do not have to have their own load testing machines. They do not need to have controllers or load generators. All they have to do is create scripts, upload them to PC, and then run the tests on the performance infrastructure that already exists. All they need is intranet access and a log-in. To be honest, this wasn't one of the use cases we had in mind when creating Performance Center, but it is a benefit that has been popping up recently with many of our customers. And it does correlate with what we heard at HPSU this year. Developers have different choices for performance testing, but when they use PC, those scripts can then be reused by the performance team. This reduces the time it takes to develop scripts by allowing reuse, and it brings the performance testers closer to the developers. And really, we are all about bringing people closer together.

Finally, we heard many people talking about WAN emulation. In case you don't know, LoadRunner and Performance Center have an integration with WAN emulation software created by Shunra. We released this integration in our latest version 9.5 and the feedback has been wonderful. Customers are telling us that they can now do types of testing that they have never been able to do or they can do it so much easier. With this integration all the setup of the WAN emulation can be done through LR or PC. You do not have to work with the Shunra application directly. Now you can emulate usage from all over the world with the load generators that you have in your test lab. Shunra's WAN emulator is sold separately and only through Shunra. But if you need to understand system performance from different points on the globe and you don't have the LGs there, then this integration is something that you need to look into. And from what I heard at HPSU, there are many of you that need to look into it.
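If you want a crude intuition for what WAN conditions do to response times before you have an emulator in place, here is a toy Python sketch (the URL and delay figures are made up) that injects artificial round-trip delay ahead of each request. To be clear, a real WAN emulator like Shunra's shapes traffic at the packet level; a sleep in a script is only a rough stand-in.

```python
import random
import time
import urllib.request

SERVICE_URL = "http://test-host:8080/app/login"  # hypothetical URL
WAN_DELAY_MS = (180, 40)  # made-up mean and jitter for a long-haul link

def wan_delayed_request(url):
    # Sleep to mimic added round-trip latency, then time the actual request.
    time.sleep(max(0.0, random.gauss(*WAN_DELAY_MS)) / 1000.0)
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

print(f"response with emulated WAN delay: {wan_delayed_request(SERVICE_URL):.2f}s")
```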

It was a good week. Actually better than expected. I thought because of the economy we wouldn't have as lively of a show. But I was wrong (this is a rarity). I thrive on getting feedback and information from customers. At events like this I get so much information. But of course it is never enough. I would like you to also provide me feedback on my products. If you would like to, you can always email me at sfeloney@hp.com. Also if you would like to set up a phone or face to face meeting just let me know.

Tuesday, May 12, 2009

ROI: You Get What You Pay For

We've all heard that saying. But how many times do we really follow it? We have bought, ok I have bought, cheap drills, exercise machines, and furniture, only to be sorry when they break prematurely. Or you find a great deal on shoes, only to have them fall apart on you while you are in a meeting with a customer. I'm not saying that happened to me, but I know how that feels.

Cheaper always seems like it's a better deal. Of course it's not always true. I can tell you that now I pay more for my shoes and I'm much happier for it :). No more embarrassing shoe problems in front of customers (not saying that it happened to me). In fact when my latest pair of shoes had an issue, I sent them back to the dealer and they mailed me a new pair in less than a week! That's service. You get what you pay for.

The same holds true for cars, clothes, hardware, repairs, and of course software testing tools. You knew I was going to have to go there.

I hear from some people that Performance Center is too expensive. I'm always amazed when I hear that. I'm not saying Performance Center is for everyone. If you don't need PC's features and functionality, it may appear pricey. If you are only looking for a simple cell phone, then a phone that connects to the internet and to your email and also has a touch screen may seem a little pricey. But if you need those features then you are able to see the value in those features.

I can sit here, go through each unique feature in Performance Center, and explain to you the value (not saying that it will not come in a future blog :) ). But why would you listen to me? I'm the product manager. Of course I'm going to tell you that there is a lot of value to PC. Well, IDC, a premier global provider of market intelligence and advisory services, has just released an ROI case study around HP Performance Center.

A global finance company specializing in real estate finance, automotive finance, commercial finance, insurance, and online banking was able to achieve total ROI in 5.6 months. Yes! Only 5.6 months, not 1 or 2 years, but a total return on investment in 5.6 months. If anything, I think we should be charging more for Performance Center :). This company did not just begin using PC; they have been using it for the last 4 years, and during that time they have found a cumulative benefit of $24M. I'd say they got a lot more than what they were paying for. Not only did they see a 5.6-month return on the investment, but they are also seeing a 44% reduction in errors and a 33% reduction in downtime in production.

What gave them these fantastic numbers?

Increased Flexibility
By moving to Performance Center they were able to schedule their tests. Before PC, they had controllers sitting around idle while other controllers were in high demand based on the scenarios that were on them. But once they were able to start scheduling their tests, they began performing impromptu load tests and concurrent load tests. They started to see that they were able to run more tests with fewer controllers.

Increased Efficiency
While they were able to increase their testing output, they didn't increase their testing resources. Their testers/engineers were able to do more through PC than they could with any other tool.

Central Team
With the central team they were able to increase their knowledge and best practices around performance testing. By doing this, along with performing more tests, they were able to reduce their error rate by 44% and their production downtime by 33%.

So you get what you pay for. Put in the time and money. Get a good enterprise performance testing product. Invest in a strong central testing team. You will get more out than you put in. In the case of this customer, they got $21M out over 4 years.

Also invest in good shoes. Cheap shoes will only cause you headaches and embarrassment (Not saying that it happened to me).

Wednesday, April 29, 2009

More About Performance Centers of Excellence

Earlier I wrote about the benefits of creating a performance CoE. I explained that by making a move to a CoE you can increase your quality, increase your testing throughput, and decrease your overall spend. I know that this sounds too good to be true. But I'm telling you it's not. Don't just take my word for it. Theresa Lanowitz of voke ran her own Market Snapshot on performance CoEs. Listen to her on May 19, 2009, when she will go over all of her findings on CoEs. And since I'm telling you about it, you can guess that the findings are good. :)


After you listen to this presentation, come back here and I'll talk about the steps you can take to begin building a CoE.




voke Shares Research Findings

ROI and performance goals with a performance testing shared service

Dear HP Valued Customer,

In today’s economic climate organizations are seeking ways to optimize resources and results. Part of the move towards optimization is the evolution and transformation of the application lifecycle, particularly the components that drive efficiency and quality.

To achieve business optimization through the application lifecycle, organizations are building centers of excellence (COE). A COE focused on performance is a strategic way to deliver customer satisfaction, high quality and resource savings.

Join Theresa Lanowitz, founder of voke Inc., as she presents the findings of a recent Market Snapshot on Performance Centers of Excellence. In this presentation, we will discuss:

  • Building a performance COE
  • Achieving performance COE ROI – qualitative and quantitative
  • Gaining organizational maturity through a performance COE
  • Realizing the benefits, results, and strategic value of a performance COE

Attendees will receive a copy of the voke Market Snapshot™ Report: Performance Center of Excellence.

Register Now »


Speakers:

Theresa Lanowitz, CEO & Founder – voke, Inc.

Priya Kothari, Product Manager, Performance Testing Solutions – HP Software + Solutions

PS: HP Software Universe 2009 registration is now open! Interested in learning the very latest about IT strategy, applications and operations? Then seize this opportunity to join us June 16-18, 2009 at The Venetian Resort Hotel Casino in Las Vegas, Nevada.

» Find out more.

DATE: Tuesday, May 19, 2009
START TIME: 10:00 am PT / 1:00 pm ET
DURATION: 60 minutes with Q&A




Sunday, April 5, 2009

Finding a Needle in a Haystack

The job of finding the proverbial needle in a haystack has always been a challenge. Digging through a haystack and struggling to find a needle is hard, if not near impossible. Plus, you don't even know if there is a needle in there! After searching for a while, I know (or at least I hope) that you'll be asking for some tools to help you out.



The first tool that I can think of is a metal detector. This makes sense. Use the metal detector to discover whether there is a needle in the stack at all. If there isn't, then you don't have to waste your time looking for it. Yay!!! At least now I know that I can quickly check stacks and only search through the ones that actually have a needle.

Is that the best or only tool that you can use? If you could cut the search time down more, wouldn't that be good? Sure, unless you really like digging through that hay. What if you had a strong magnet? That would rock!!! First, use the metal detector to make sure that there is a needle; then bring in the strong magnet and voila! Out comes the needle, and you're done! No more searching required, and your job just got a whole lot easier.

As you may have already guessed, I'm not here to discuss haystacks. They are fun and all but not really the point. Let's tie this back to performance testing. That's more fun than haystacks anyway.

In the beginning, people tried to performance test manually, with lots of people attacking an application at the same time. This turned out to be time consuming, not repeatable, and not that beneficial--like digging through that haystack by hand.

Then came automated load testing with monitoring. Now there is a repeatable process, with monitors to help discover whether there are problems. This is a tremendous time saver and helps ensure the quality of apps going to production--the metal detector, if you will.

Most of you know and love the metal detector, as well you should. But it is about time you were introduced to the strong magnet of performance testing. Say hello to Diagnostics (Diag). Some call it deep-dive; others call it code-level profiling. I call it the time saver. Diagnostics will pinpoint (or should I say needle-point :-) ) problems down to the method or SQL statement level. That is huge! Now you have much more information to take back to developers. No longer limited to transaction-level information, you can now show them the method or SQL statement that is having the problem within the transaction. This slashes the time it takes to fix problems. Diagnostics works by placing agents on the application under test (AUT) machines and hooking the J2EE and/or .NET code; that is how it can show where the bottlenecks are. Highlighting slow methods and SQL statements is good, but being able to tie them back to a transaction is even better.
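To give a feel for what method-level visibility buys you, here is a toy Python illustration of timing individual methods and rolling the totals up. This is not how Diagnostics is implemented (real deep-dive tools instrument the J2EE/.NET bytecode rather than asking you to decorate methods); it just shows the kind of per-method data such a tool surfaces. The method names are hypothetical.

```python
import functools
import time

method_totals = {}  # method name -> accumulated seconds

def timed(fn):
    # Wrap a function so every call adds its elapsed time to method_totals.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            method_totals[fn.__qualname__] = (
                method_totals.get(fn.__qualname__, 0.0) + elapsed)
    return wrapper

@timed
def lookup_order(order_id):  # hypothetical method hiding a slow SQL statement
    time.sleep(0.2)

@timed
def checkout_transaction():  # hypothetical business transaction
    for order_id in range(3):
        lookup_order(order_id)

checkout_transaction()
for name, total in sorted(method_totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total:.2f}s spent")
```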

From personal experience, I can tell you that Diagnostics works extremely well. Back at good old Mercury Interactive, we had a product that was having some performance issues. Our R&D had been spending a few weeks looking for the problem(s). Finally, I approached them and asked if they had tried our Diagnostics on it. They of course said no; otherwise this would be a counterproductive story. After I explained to them in detail the importance and beauty of Diagnostics, they set it up. Within the first hour of using Diagnostics, they found multiple problems. The next day, all of them were fixed. R&D went from spending weeks looking for the needles that they couldn't find to finding them within an hour with the magnet I gave them. And now they always use Diagnostics as part of their testing process.

I've heard people complain that it takes too much time to learn and set up. First off, it isn't that complicated to set up. Secondly, yes it is something new to learn but it's not too difficult. Once you learn it, however, it is knowledge that you can use over and over again. It took time to learn how to performance test to begin with. Record, parameterize, correlate, and analyze all took time to learn. This is just one more tool (or magnet) in your tool belt. It will save you and the developers tremendous amounts of time.

Isn't Diagnostics something that IT uses in production? Yes it is! Both sides (testing and production) can gain valuable information from using Diag, and both for the same reason: find the problem and fix it fast. Testers can always run a test again to see if they can reproduce a problem. But in production, once an issue occurs, they want to be able to grab as much information as possible so that they can prevent it from happening again. After production gets this information, they can pass it back to testing to reproduce the issue. If the performance group is using the same Diag as production, then it is easier to compare results. A dream of production and testing working together in harmony, but I digress.

I have said for years that if performance testers are not utilizing diagnostics, then they are doing a disservice to themselves and to the task. Stop digging through the haystack. Pick up that magnet and start pulling out those needles!

Click here to learn more

Monday, March 30, 2009

Cloud Assure



HP Unveils “Cloud Assure” to Drive Business Adoption of Cloud Services
Also expands HP Software-as-a-Service partner program to help resellers better service customers

PALO ALTO, Calif., March 31, 2009 – HP today announced HP Cloud Assure, a new Software-as-a-Service (SaaS) offering designed to help businesses safely and effectively adopt cloud-based services.
HP also introduced updates to its HP SaaS reseller program that allow partners to provide added services and value to customers.
Cloud computing is a term used to describe services that can be delivered and used over the Internet through an “as-needed, pay-per-use” business model.
The promise of cloud computing is appealing because it can reduce business costs and provide greater flexibility and scalability of services throughout the enterprise. However, IT organizations unable to ensure the security, performance and availability of the cloud services they provide or consume may be putting their businesses at risk. This uncertainty can be an impediment to wider adoption of cloud services by enterprises.
“Whether providing or consuming cloud services, enterprises should be as ready to manage the risks as they are to reap the significant rewards of the cloud model,” said Frank Gens, senior vice president and chief analyst, IDC. “HP’s new offering is aimed at addressing what IDC has identified as the top three concerns of cloud computing among enterprises – security, performance and availability.”

HP Cloud Assure consists of HP services and software, including HP Application Security Center, HP Performance Center and HP Business Availability Center, and is delivered to customers via HP SaaS. HP also provides customers with a team of expert engineers that performs security scans, executes performance tests and deploys availability monitoring.
HP Cloud Assure helps customers validate:
· Security – by scanning networks, operating systems, middleware layers and web applications. It also performs automated penetration testing to identify potential vulnerabilities. This provides customers with an accurate security-risk picture of cloud services to ensure that provider and consumer data are safe from unauthorized access.
· Performance – by making sure cloud services meet end-user bandwidth and connectivity requirements and provide insight into end-user experiences. This helps validate that service-level agreements are being met and can improve service quality, end-user satisfaction and loyalty with the cloud service.
· Availability – by monitoring cloud-based applications to isolate potential problems and identify root causes with end-user environments and business processes and to analyze performance issues. This allows for increased visibility, service uptime and performance.

“HP helps Akamai validate to our enterprise prospects and customers the performance and availability improvements we provide in the cloud,” said Willie M. Tejada, vice president, Application and Site Acceleration for Akamai Technologies, a Massachusetts-based service provider for accelerating content and applications online. “HP SaaS confirms the benefits of our dynamic Web acceleration services and will be important to us as we continue to introduce additional functionality that helps businesses effectively optimize the cloud.”
HP Cloud Assure provides control over the three types of cloud service environments:
· For Infrastructure as a Service, it helps ensure sufficient bandwidth availability and validates appropriate levels of network, operating system and middleware security to prevent intrusion and denial-of-service attacks.
· For Platform as a Service, it helps ensure customers who build applications using a cloud platform are able to test and verify that they have securely and effectively built applications that can scale and meet the business needs.
· For Software as a Service, it monitors end-user service levels on the cloud applications, load tests from a business process perspective and tests for security penetration.

“There is no question that cloud computing is providing a new set of opportunities for businesses, but it presents new risks as well,” said Scott Kupor, vice president, Software as a Service, HP. “With over nine years of SaaS experience and a leading portfolio of solutions in security, performance and availability, HP is uniquely positioned to help assure our customers can leverage the promise of cloud while removing risk from the equation.”

HP Cloud Assure is available today. More information about the offering is available at http://www.hp.com/go/cloudassure.
HP Software-as-a-Service partner program updates
Building on a reseller option introduced to partners last year, the expanded HP SaaS partner program now includes a partner-led delivery option.
The new option allows partners to provide specialized services, based on their areas of expertise, on top of the HP SaaS portfolio. The option helps partners:
· deliver services more quickly by leveraging the already-deployed HP SaaS solutions;
· focus on delivering high-value, high-margin consulting services for customers; and
· strengthen partnerships with customers through ongoing service.



“Our customers’ reduced IT resources have necessitated them to do more with less, but especially now, there is more strain to accelerate business value,” said Jeff Jamieson, co-founder and vice president of sales at Whitlock Infrastructure Solutions. “HP SaaS Partner-Led Delivery allows partners to leverage areas of expertise to deliver value-added consulting services.”
The partner-led delivery option is available currently for HP SaaS for Business Availability Center, HP SaaS for Performance Center and HP SaaS for Quality Center. Expansion to include the rest of the SaaS Business Technology Optimization software portfolio is under development.
Register to attend a live webcast
HP Cloud Assure Offering & HP SaaS Partner-Led Delivery Option
Tuesday, March 31, 2009 at 8 a.m. to 10 a.m. PT


Wednesday, March 18, 2009

Application Lifecycle Management Webcast

Next week Paul Ashwood and Brad Hipps will be giving a webcast over on theserverside.com, presenting a whitepaper and case study on Application Lifecycle Management (ALM). There will be some interesting details on how testing can be integrated into the lifecycle for application development and delivery to production. If your company or testing organization includes XXXXXXXXXXXXXXXXXX as part of the SDLC or lifecycle processes, this might be very useful information, and I'm sure Brad would love to hear some questions from you.


Here's the official abstract:


"Effective companies are riding the latest waves in application modernization. These waves touch nearly all of IT from technology and staffing to application architectures and release strategies.


HP Application Lifecycle Management (ALM) solutions help your IT organization make the most of these trends and avoid being swamped in the process. ALM from HP is an integrated suite of leading solutions that enables your IT leaders to answer comprehensively the key questions business stakeholders have regarding application modernization.


You are cordially invited to view HP's unique perspective on what ALM is and how our solution renders better business outcomes. All attendees will receive our new white paper, “Redefining the application lifecycle: Looking beyond the application to align with business goals.”


Click here to register

Monday, March 16, 2009

PC 9.5 Webinar March 16, 2009 with Nationwide

HP Webinar with Nationwide

HP Software recently released the 9.5 versions of HP LoadRunner software and HP Performance Center software. With these new releases, HP is addressing today's top-of-mind application challenges around rapid technology change and adoption of new processes. In this challenging economic environment, companies have to be as lean and agile as possible.



Please join HP and Nationwide for an informative Webinar where you will hear the highlights of the latest release as well as get a preview of an actual implementation of HP Performance Center 9.5 from John Seling of Nationwide.



You will hear:


-What new capabilities are included in the 9.5 release

-How Nationwide leveraged the new features in 9.5 to shorten their test timeframes and make their tests more realistic

-More about the new integration with Shunra Software


Join us to find out how HP's performance validation solutions can enable you to achieve legendary QA projects. All attendees will receive our new white paper, "Innovations in enterprise-scale requirements management, quality planning, and performance testing".


Register Now »


DATE: March 17, 2009

TIME: 10:00 a.m. PT / 1:00 p.m. ET

SPEAKERS:
John Seling, Performance & Data Engineering Manager, Nationwide

Priya Kothari, Product Marketing Manager (LoadRunner & Performance Center), HP Software


DURATION: 60 minutes with Q&A

REGISTER: Click here to register for the webinar

Sunday, March 15, 2009

Offshoring / Outsourcing Performance Testing

There are 2 main reasons why companies utilize offshoring (outsourcing) for performance testing.


The main reason is cost savings. Obviously, companies will try to choose locations where there is a lower cost of doing business. The 2nd reason is the ability to ramp up new testers quickly. If there is greater demand for testing than the current set of testers can handle, then offshoring or outsourcing can be utilized to quickly gain more testers to help with the excess demand.


In an ideal world, all performance testers are the same. If you can find cheaper testers elsewhere, then you will get immediate cost savings. But as we know, we do not live in an ideal world. There are different levels of knowledge, skill, and motivation. We have seen time and time again offshoring fail because companies do not have the correct expectations, do not set up the proper training, and do not have the correct set of tools.


You cannot assume (we all know what that does) that just contracting with a secondary company to provide all or partial performance testing will automatically start showing benefits.


There is no reason why offshoring cannot be a successful venture for companies. They must research the offshoring options and find ones that are a good fit in terms of skill sets, low turnover, and a proven track record.


Once an outsourcing company has been chosen, then there has to be training. They must understand how your company expects the testing to be performed. They must know what types of tests you want them to do (stress, load, failover, etc.), the kind of reports that you want, and the SLAs that you expect them to achieve.


After you have chosen the team and provided the appropriate training and expectations, what is left? What tools are they going to be using? The same set of tools that you used when the entire team was internal? At first this seems like the correct response. If it worked internally, why wouldn't it work for an outsourcer? Let's explore this for a moment.


First, let's just talk licenses. How is the outsourcing group going to gain licenses? Do they have their own licenses that they can use? Most do not, and they rely on the company to provide them. So do you transfer the licenses that you have internally to the outsourcer? Do you want to keep some of the licenses in-house so that you can perform tests internally when needed? More than likely you will be keeping at least some of your performance testing licenses in-house. So that means you will have to buy more licenses for the outsourced team. Can your current testing tool help with this?


What about testing machines? Do you need to get more controllers and load generators? Can the outsourced team utilize the machines that you currently have? Can your current testing tool help with this?


What about management? How do you know that the outsourced team is doing what they are supposed to do? How do you know if the tests that they are creating are correct? How do you know if they are testing enough? In short how do you know that they are doing a good job? Lack of proper management and oversight is one of the biggest reasons why offshoring fails. Can your current testing tool help with this?


What if you would like "follow the sun" testing or better collaboration on testing? Let's say that you have an important project that needs to get tested quickly, and the only way to get this done is to keep handing off the test to different testers around the world. When one location is done for the day, a new tester can pick up where the last left off and continue with the testing. This becomes a real possibility with offshoring. A test can begin in-house and then shift to an outsourcer during off-hours, decreasing the time it takes to get the results to the line of business. Can your current testing tool help with this?




HP Performance Center (PC) is the enterprise performance testing platform that can help you with your offshoring/outsourcing needs. Let's start from the top. PC has shared licenses. Anyone around the world who is given the proper permission can access the PC web interface and run tests. There is no need for more licenses unless there is demand for more concurrent testing. And if your demand for more simultaneous tests is growing, then you are doing something right.


Now let's move on to machines. With Performance Center all the machines (controllers and load generators) are centrally managed. There is no need to have LGs placed throughout the world. Testers, worldwide, have access to tests and machines through PC. Again the only time that more machines are needed is if the demand increases. No need to buy more machines just because you have outsourced the job.


Performance Center was created for performance testing management. From a single location you can see what projects and tests have been created, how many tests have been executed and who ran them. There is no need to have scripts and reports emailed or copied. All testing assets are stored centrally and accessible through PC's web interface.


Not only can you view the projects, scripts, and results, you can also manage the testing environment itself. You can run reports to see what the demand for your testing environment is and then plan for increases accordingly.


How about "follow the sun" testing? With HP Performance Center anyone with proper access can take over testing. Since all scripts, scenarios, and profiles are completely separated from controllers can stored centrally, it is easy for a new tester to pick up where a previous tester left off. There is no need to upload scripts and scenarios to a separate location, or remember to email them to the next tester. It is all available 24x7 through Performance Center.


Collaboration on testing becomes much easier with PC than with almost any other tool. If you need different people at different locations to all watch a test as it is running, PC can accommodate that. Just log on to the running test and choose the graphs that you are interested in. Now all viewers are watching the test with the information they care about, all through one tool.


HP Performance Center is your best performance testing platform choice when it comes to offshoring and outsourcing.


So after you pick the correct outsourcing company, and properly train them, make sure that you use HP Performance Center to ensure the highest cost savings and highest quality.