Some programs depend on a distinction between a null array and an empty array. The construct commonly used to represent arrays in XML schemas makes no such distinction. Is there anything you can do to work around this limitation of XML? This tip will show you.
When working with Web services, it is all too often assumed that anything that can be done in a programming language can be done in XML. There are many cases where that is not true. This tip addresses one of those cases: the distinction between an array which is null, and an array which has no elements.
An XML array
Most programming languages, like the Java language, have the concept of an array: a sequential collection of like elements. XML also has a sequential collection of like elements: an XML schema element with a maxOccurs attribute whose value is greater than 1. So it stands to reason that the Java language's sequential collection of like elements would map nicely to XML's sequential collection of like elements. Listing 1 defines a complexType which contains such an XML schema 'array'.
Listing 1. A complexType containing an 'array'
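As a rough sketch (the exact original listing may differ), such a complexType might look like this, with a name element followed by an integer element that can occur zero or more times:

<complexType name="bean">
  <sequence>
    <element name="name" type="xsd:string"/>
    <element name="array" type="xsd:int" nillable="true" minOccurs="0" maxOccurs="unbounded"/>
  </sequence>
</complexType>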
The problem
This XML "array" is not strictly an array. It is an element with an occurrence constraint, which means that the element is defined to occur a specific number of times, in this case 0 or more times. This does sound a lot like an array, and for most intents and purposes, it is. But the mapping isn't perfect. You should be aware of the shortcomings so they don't catch you by surprise.
Following the JAX-RPC mapping rules, the complexType in Listing 1 would become the Java bean in Listing 2 (actually, the bean would have getters and setters, but we'll keep it simple for this discussion).
Listing 2. A bean containing an array.
public class Bean {
public java.lang.String name;
public java.lang.Integer[] array;
}
(Note that Bean's array variable is an array of java.lang.Integer, not an array of int. The array element from the XML schema is nillable. A Java int cannot be null. A java.lang.Integer can be null. So we use java.lang.Integer in this mapping.)
Table 1 gives a number of examples of mapping an instance of the Java bean to an instance of the corresponding XML. The first row is the Java representation; the second row is the corresponding XML representation.
One obvious thing to note about Table 1 -- and it's the topic of this tip -- is that an empty instance of a Java array and a null instance of a Java array map to the same XML instance. This is not good if you're depending on a distinction between the two.
One easy trap to fall into here is to guess that a null array inside a bean is really represented by the XML in the second column. But as we hope we've shown in the table, that really represents an array with a single element whose value is null, not a null array.
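Since Table 1 is not reproduced here, the following illustrative instances show the three cases under discussion (element names follow Listing 2; the exact serialized form depends on the JAX-RPC implementation):

<!-- Java: bean.array = new Integer[0] (an empty array) -->
<bean>
  <name>sample</name>
</bean>

<!-- Java: bean.array = null (a null array) - the XML is identical to the empty case -->
<bean>
  <name>sample</name>
</bean>

<!-- Java: bean.array = new Integer[] { null } (one element whose value is null) -->
<bean>
  <name>sample</name>
  <array xsi:nil="true"/>
</bean>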
Is there a way around this issue?
Of course! The thing to be aware of is that an array in most programming languages is really made up of two things: there are the contents of the array and there is the array itself -- a wrapper, if you like, around the contents. An XML "array" is only a list of the elements. There is no wrapper.
So the solution is simple: create a wrapper for the array, as shown in Listing 3.
Listing 3. A bean containing a wrapped array.
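As a rough sketch (the exact original listing may differ), the wrapped approach might look like the following, where the bean's array element now has the arrayWrapper type and is itself nillable:

<complexType name="arrayWrapper">
  <sequence>
    <element name="item" type="xsd:int" nillable="true" minOccurs="0" maxOccurs="unbounded"/>
  </sequence>
</complexType>

<complexType name="bean">
  <sequence>
    <element name="name" type="xsd:string"/>
    <element name="array" type="tns:arrayWrapper" nillable="true"/>
  </sequence>
</complexType>

With a schema along these lines, a null Java array can be serialized as <array xsi:nil="true"/>, while an empty Java array is serialized as an <array/> element with no children, so the two instances are no longer identical.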
As you can see, the empty instance and the null instance of the arrayWrapper complexType are distinct from each other.
This solution isn't a cure-all. First of all, it's rather more complex than a simple minOccurs/maxOccurs representation of an array. Secondly, instead of a simple bean containing an array, this XML schema really looks like a bean containing a bean containing an array; and that's likely what you'll end up with if you map this XML schema to Java code with your favorite JAX-RPC WSDL-to-Java tool. Until standards bodies recognize and map this wrapped array pattern appropriately, this solution is something you should apply only if you really must distinguish a null array from an empty array.
Summary
XML "arrays" are not truly arrays in a programming language sense. XML does not distinguish between a null array and an empty array. There is an XML schema pattern that you can follow to get the equivalent distinction, but this pattern is not well recognized by standards bodies and should only be used when absolutely necessary.
Testing Tools Interview Questions
What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
What are some recent major computer system failures caused by software bugs?
Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. Studies were under way to determine which, if any, portions of the project could be salvaged.
In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.
Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.
A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.
According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.
In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980's one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.
A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.
News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.
In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A lower court's earlier ruling that "...six miscues out of more than 400 trades does not indicate negligence" was invalidated.
In April of 2003 it was announced that a large student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company would still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.
News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.
In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.
News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.
In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.
January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.
A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software's inability to handle credit cards with year 2000 expiration dates.
In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each other's reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers."
In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOC's to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It had nothing to do with the integrity of the software. It was human error.'
On June 4, 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launch, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point value to a 16-bit signed integer.
Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software code was rewritten.
Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."
Why does software have bugs?
miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.
programming errors - programmers, like anyone else, can make mistakes.
changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I take a close look at it'
'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
How can new Software QA processes be introduced in an existing organization?
A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.
What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about quality assurance?' Their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.
XPLANNER
XPlanner is a project planning and tracking tool for eXtreme Programming (XP) teams. If you are not familiar with XP software development practices, the links page contains pointers to relevant resources. To summarize the XP planning process, the customers pick the features to be added (user stories) to each development iteration (typically, one to three weeks in duration). The developers estimate the effort to complete the stories either at the story level or by decomposing the story into tasks and estimating those. Information about team development velocity from the previous iteration is used to estimate if the team can complete the stories proposed by the customer. If the team appears to be overcommitted, the set of stories are renegotiated with the customer. The XPlanner tool was created to support this process and address issues experienced in a long-term real-life XP project.
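As a minimal, purely illustrative sketch of that velocity check (the numbers are hypothetical and this is not XPlanner code):

using System;

class VelocityCheck
{
    static void Main()
    {
        double previousVelocity = 40.0; // hours of work completed in the last iteration
        double[] storyEstimates = { 12.0, 8.0, 15.0, 10.0 }; // estimates for the proposed stories

        double proposed = 0.0;
        foreach (double estimate in storyEstimates)
            proposed += estimate;

        // If the proposed work exceeds the demonstrated velocity,
        // the set of stories is renegotiated with the customer.
        Console.WriteLine(proposed > previousVelocity
            ? "Team appears overcommitted; renegotiate the stories"
            : "Proposed stories fit within last iteration's velocity");
    }
}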
This is very much a work in progress. We expect this tool to evolve as our and the software community's understanding of XP and other agile processes increases. If you'd like to discuss the planning approaches supported by this tool or provide other feedback and suggestions, there is a mailing list for that purpose, or you can contribute to our wiki.
Features
Simple planning model
Virtual note cards
Support for recording and tracking projects, iterations, user stories, and tasks.
Smart continuation of unfinished stories (unfinished tasks copied, copied stories are crosslinked).
Distributed integration token (with email notification)
Online time tracking and time sheet generation at individual/team level
Metrics generation (team velocity, individual hours, ...)
Charts for iteration velocity, Scrum burn-down, distribution of task types, dispositions, and more.
Ability to attach notes to stories and tasks (with attachments).
Iteration estimate accuracy view
Page showing task and story status for individual developers and customers.
Export of project and iteration information to XML, MPX (MS Project), PDF, and iCal formats.
TWiki-style text formatting support, with support for external tool integration and extensible wiki-word linking.
Integrated, extensible authentication supports multiple projects with project-specific authorization.
SOAP interfaces for advanced XPlanner integration and extension.
Language support for English, Spanish, French, German, Italian, Brazilian Portuguese, Danish, Russian, Chinese, and Japanese.
XPlanner Tips
Use a pseudo-iteration to store unplanned stories
XPlanner doesn't currently have direct support for an unplanned story container. However, most teams create a pseudo-iteration called something like "backlog" or "unplanned stories" with a start date far into the future. Unplanned stories are placed in this container and then moved to an iteration during the planning game.
Use Edit links in aggregate pages
In pages showing tables of objects, use the edit link on each row to edit objects rather than selecting the name link, editing the object, and then navigating back to the aggregate page. This will save many mouse clicks.
Use XSLT to convert XML export data to other formats
You can use an XSLT transform to convert the XML export to formats such as RTF (MS Word), Postscript, or static XHTML pages.
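For instance, a small C# sketch using System.Xml.Xsl could apply such a transform (the file and stylesheet names here are hypothetical):

using System.Xml.Xsl;

class ExportTransform
{
    static void Main()
    {
        // Load a stylesheet that turns the XPlanner XML export into XHTML,
        // then apply it to the exported data file.
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("iteration-report.xsl");
        xslt.Transform("xplanner-export.xml", "iteration-report.html");
    }
}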
Buttons
#1. How to set the default button for a form?
The default button of a form is the button whose Click event fires when the Enter key is pressed. To make a button the default button of a form, set the form's AcceptButton property. You can do this either through the designer, or through code such as
form1.AcceptButton = button1;
#2. How to set the Cancel button for a form?
The Cancel button of a form is the button whose Click event fires when the ESC key is pressed. To make a button the Cancel button of a form, set the form's CancelButton property. You can do this either through the designer, or through code such as
form1.CancelButton = button1;
#3. How to trigger a button click event?
In VB 6.0 it was possible to call a CommandButton's Click event handler from anywhere, like any other method or function (Sub). In .NET this is not possible in the same way, but .NET provides a very simple alternative: just use the button's public PerformClick method.
button1.PerformClick();
Alternative: The tip below is provided by kaminm
You can trigger a button (Web and Win) by calling its Click handler directly with null parameters:
btnClear_Click(null, null);
Alternative: The tip below is provided by Paul Brower
You can use it this way; if you're planning on doing something with the sender object, you then have a reference to it:
button1_Click(button1, EventArgs.Empty);
Combo Box
#1. How to fill a ComboBox with the available fonts?
comboBox1.Items.AddRange (FontFamily.Families);
Text Box
#1. How to disable the default ContextMenu of a TextBox?
To prevent the default context menu of a TextBox from showing up, assign an empty context menu as shown below:
textBox1.ContextMenu = new ContextMenu ();
#2. How to enter multiline text in textbox through code?
Sometimes you need to show data on multiple lines. The first idea that comes to mind is to set the Multiline property to true and use the '\n' escape sequence, but that escape sequence alone is not handled by the .NET TextBox. Still, it is very easy to overcome this problem. To assign multiline text at design time, use the Lines property of the TextBox control in the designer window. To achieve this at runtime, create an array of strings and assign it to the Lines property of the TextBox, as shown below.
string [] strAddress = {"Mukund Pujari","Global Transformation Technologies","Pune, India"};
textBox1.Multiline = true;
textBox1.Lines = strAddress;
Alternative: The tip below is provided by joelycat
.NET text boxes don't recognize \n but they do recognize \r\n. Try:
textBox1.Text="Line 1\r\nLine2\r\nLine3.";
Alternative: The tip below is provided by Robert Rohde
Actually "System.Environment.NewLine" should be used instead. This way you are platform independant.
Alternative: The tip below is provided by Redgum
Simply use a "RichTextBox" for those areas on your form that require multiple lines of randomly output text, and use a simple text box for those that do not.
#3. Some useful TextBox Validations
Numeric TextBox
private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if ( !( char.IsDigit( e.KeyChar ) || char.IsControl( e.KeyChar ) ) )
{
e.Handled = true;
}
}
Numeric TextBox with Decimals
private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if ( !( char.IsDigit( e.KeyChar ) || char.IsControl( e.KeyChar ) || ( e.KeyChar == '.' ) ) )
{
e.Handled = true;
}
}
TextBox Allowing Characters Only
private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if ( !( char.IsLetter( e.KeyChar ) || char.IsControl( e.KeyChar ) ) )
{
e.Handled = true;
}
}
TextBox Allowing Upper Case Characters Only
private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if ( !( char.IsUpper( e.KeyChar ) || char.IsControl( e.KeyChar )) )
{
e.Handled = true;
}
}
TextBox Allowing Lower Case Characters Only
private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if ( !( char.IsLower( e.KeyChar ) || char.IsControl( e.KeyChar )) )
{
e.Handled = true;
}
}
Check For Unfilled TextBox
// Call this function and pass the textboxes to check as parameters
public static bool ChkEmpty(params System.Windows.Forms.TextBox[] tb)
{
int i;
for (i = 0; i < tb.Length; i++)
{
if (tb[i].Text.Trim() == "")
{
MessageBox.Show("Don't keep field empty");
tb[i].Focus();
return false;
}
}
return true;
}
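For example (textBox1 and textBox2 stand for whatever textboxes your form actually contains):

if (ChkEmpty(textBox1, textBox2))
{
    // all fields are filled in; continue processing
}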
Localizing Validations - Country Specific Decimal Character
The tip below is provided by curt
Here he explains how different characters can be used for the decimal point depending on the country. For example, people in France may use a character other than the dot (.) as the decimal separator.
// Requires: using System.Threading;
string DecimalSeparator = Thread.CurrentThread.CurrentCulture.NumberFormat.NumberDecimalSeparator;
private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if ( !( char.IsDigit( e.KeyChar) || char.IsControl( e.KeyChar ) || (DecimalSeparator.IndexOf(e.KeyChar) != -1 ) ) )
{
e.Handled = true;
}
}
DateTime Picker
#1. How to make the DateTimePicker show empty text if no date is selected?
Use the following code, for example in a button's Click event:
dateTimePicker1.CustomFormat=" ";
dateTimePicker1.Format=DateTimePickerFormat.Custom;
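To show a date again once the user picks one, you can switch back to a normal format, for example in the DateTimePicker's ValueChanged event (the handler wiring is assumed to be done in the designer):

private void dateTimePicker1_ValueChanged(object sender, EventArgs e)
{
    // Restore a normal date display in place of the blank custom format
    dateTimePicker1.Format = DateTimePickerFormat.Short;
}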
Data Grid
#1. How to remove the indeterminate status of checkbox in datagrid?
The checkbox in a checkbox column of the DataGrid shows an indeterminate state by default. To remove this behaviour, set the AllowNull property of the CheckBox column to false, as below:
DataGridTableStyle ts1 = new DataGridTableStyle(); // Create a new TableStyle
ts1.MappingName = "Items"; // Assign the name of the DataTable to which the style is applied
DataGridColumnStyle boolCol = new DataGridBoolColumn(); // Create a CheckBox column
boolCol.MappingName = "ch"; // Assign the name of the data column
boolCol.AllowNull = false; // This property actually removes the indeterminate status of the checkboxes
ts1.GridColumnStyles.Add(boolCol); // Add the column style to the table style
dataGrid1.TableStyles.Add(ts1); // Attach the table style to the grid so the setting takes effect
#2. How to group columns in DataGrid?
Hi friends, you may know better ways of doing this, but I managed to find this solution in the time limit I had been given. The logic is that, while looping through the DataTable, we save the value of the current row and the previous row and compare them. If the current value is the same as the previous value, we don't show it in the grid; if it is different, we show it.
/* The logic: while looping through the DataTable we save the value
of the current row and the previous row and compare them.
If the current value is the same as the previous value, we don't show
it in the grid; if it is different, we show it.
1. We save the value of the current row in the variable 'strCurrentValue'.
2. At the end of the loop we assign the value in 'strCurrentValue' to
the variable 'strPreviousValue'.
3. In the next iteration, we get a new value in 'strCurrentValue'.
4. Now we can compare 'strCurrentValue' and 'strPreviousValue'
and accordingly show or hide the value in the column.
*/
string strCurrentValue = string.Empty; // declare the comparison variables up front
string strPreviousValue = string.Empty;
for (int m = 0; m < dt.Rows.Count; m++) // loop over every row of the DataTable 'dt' bound to the grid
{
object cellValue = dt.Rows[m]["Category"]; // Here we catch the value from the current row
strCurrentValue = cellValue.ToString().Trim(); // We assign the above value to 'strCurrentValue'
if(strCurrentValue!=strPreviousValue) // Now compare the current value with previous value
{
dt.Rows[m]["Category"]=strCurrentValue; // If current value is not equal to previous
// value the column will display current value
}
else
{
dt.Rows[m]["Category"]=string.Empty; // If current value is equal to previous value
// the column will be empty
}
strPreviousValue=strCurrentValue; // assign current value to previous value
}
strCurrentValue=string.Empty; // Reset Current and Previous Value
strPreviousValue=string.Empty;
Panel
#1. How to make a Panel or Label semi-transparent on a Windows Form?
You can make a panel or label semi-transparent by specifying an alpha value for the background color.
panel1.BackColor = Color.FromArgb(65, 204, 212, 230);
NOTE: In the designer you have to enter these values manually in the edit box; don't select the color using the color picker.
The Software Testing Automation Framework
Software testing is an integral, costly, and time-consuming activity in the software development life cycle. As is true for software development in general, reuse of common artifacts can provide a significant gain in productivity. In addition, because testing involves running the system being tested under a variety of configurations and circumstances, automation of execution-related activities offers another potential source of savings in the testing process. This paper explores the opportunities for reuse and automation in one test organization, describes the shortcomings of potential solutions that are available “off the shelf,” and introduces a new solution for addressing the questions of reuse and automation: the Software Testing Automation Framework (STAF), a multiplatform, multilanguage approach to reuse. It is based on the concept of reusable services that can be used to automate major activities in the testing process. The design of STAF is described. Also discussed is how it was employed to automate a resource-intensive test suite used by an actual testing organization within IBM.
In late 1997, the system verification test (SVT) and function verification test (FVT) organizations with which I worked recognized a need to reduce per-project resources in order to accommodate new projects in the future. To this end, a task force was created to examine ways to reduce the expense of testing. This task force focused on improvement in two primary areas, reuse and automation. For us, reuse refers to the ability to share libraries of common functions among multiple tests. For purposes of this paper, a test is a program executed to validate the behavior of another program. Automation refers to the removal of human interaction with a process and placing it under machine or program control. In our case, the process in question was software testing. Through reuse and automation, we planned to reduce or remove the resources (i.e., hardware, people, or time) necessary to perform our testing.
To help illustrate the problems we were seeing and the solution we produced, I use a running example of one particular product for which I was the SVT lead. This product, the IBM OS/2 WARP* Server for e-Business, encompassed not only the base operating system (OS/2*—Operating System/2*) but also included the file and print server for a local area network (LAN) (known as LAN Server), Web server, Java** virtual machine (JVM), and much more. Testing such a product is a daunting, time-consuming task. Any improvements we could make to reduce the complexity of the task would make it more feasible.
For our purposes, a test suite is a collection of tests that are all designed to validate the same area of a product. I discuss one test suite in particular, known affectionately as “Ogre.” This test suite was designed to perform load and stress testing of LAN Server and the base OS/2. Ogre is a notoriously resource-intensive test suite, and we were looking at automation to help reduce the hardware, number of individuals, and time necessary to execute it.
With a focus on reducing the complexity of creating and automating our testing, we looked at existing solutions within IBM and the test industry. None of these solutions met our needs, so we developed a new one, the Software Testing Automation Framework (STAF). This paper explores the design of STAF, explains how STAF addresses reuse, and details how STAF was used to automate and demonstrably improve the Ogre test suite. The solution provided by STAF is quite flexible. The techniques presented here could be used by most test groups to enhance the efficiency of their testing process.
The testing cycle consists of five stages: planning, design, development, execution, and analysis or review. Planning consists of analyzing the features of the product to be tested and detailing the scope of the test effort. Design includes documenting and detailing the tests that will be necessary to validate the product. Development involves creating or modifying the actual tests that will be used to validate the product. Execution is concerned with actually exercising the tests against the product. Analysis or review consists of evaluating the results and effectiveness of the test effort; the evaluation is then used during the planning stage of the next testing cycle.
Reuse is focused on improving the development, and to a lesser extent the design, portions of the testing cycle. Automation is focused on improving the execution portion of the testing cycle. Although every product testing cycle is different, generally, most person-hours are spent in execution, followed by development, then design, planning, and analysis or review. By improving our reuse and automation, we could positively influence the areas where the most effort is expended in the testing cycle.
The following subsections look individually at the areas of reuse and automation and delineate the problems we faced in each of these areas.
Reuse. This subsection provides some examples from the OS/2 WARP Server for e-Business SVT team that motivated the desire for reuse. Within the team, there were numerous smaller groups that were focused on developing and executing tests for different areas of the entire project. We wanted to ensure that each of these groups could leverage common sets of testing routines. To better understand this desire for reuse, consider some of the potential problems surrounding the seemingly simple task of logging textual messages to a file from within a test. Several issues arise when this activity is left to be reinvented by each tester or group of testers, instead of using a common reusable routine. The problems are:
Log files are stored in different places: Some groups create log routines that store the log files in the directory in which the test is run. Others create log routines that store them in a central directory. This discrepancy makes it difficult to determine where all the log files for tests run on a given system are stored. Ultimately, you have to scour the whole system looking for log files.
Log file formats are different: Different groups order the data fields in a log record differently. This difference makes it difficult to write scripts that parse the log files looking for information.
Message types are different: One group might use “FATAL” messages where another would use “ERROR,” or one group might use “TRACE” where another would use “DEBUG.” This variation makes it difficult to parse the log files. It also increases the difficulty in understanding the semantic meaning of a given log record.
None of these problems is insurmountable, and many could be handled sufficiently well through a “standards” document indicating where log files should be stored, the format of the log records, and the meaning, and intended use, of message types. Nonetheless, this list provides justification for our desire for common and consistent reusable routines. Also, additional problems exist that cannot be addressed by adhering to standards.
Multiple programming languages. Our testers write a wide variety of tests in a variety of programming languages. When testing the C language APIs (application programming interfaces) of the operating system, they write tests in C. When testing the command line utilities of the operating system or applications with command line interfaces, they write tests in scripting languages such as REXX (which is the native scripting language of OS/2). When testing the Java virtual machine of the operating system, they write tests in the Java language. In order for our testers to use common reusable routines to perform such tasks as logging, described above, the routines needed to be accessible from all the languages they use.
Multiple codepages. OS/2 WARP Server for e-Business was translated into 14 different languages, among them English, Japanese, and German. It is not uncommon for problems to exist in one translated version but not in another. Therefore, we were responsible for testing all of these versions. Testing multiple versions introduces additional complexities in our tests, and in particular to any set of reusable components we wanted our testers to use. One specific aspect of this situation is the use of different codepages by different translated versions. A codepage is the encoding of a set of characters (such as those used in English or Japanese) into a binary form that the computer can interpret. Using different codepages means that one codepage can encode the letter “A” in one binary form and another can encode it in a different binary form. Hence, care must be taken when manipulating the input and output of programs that use different codepages—a situation our testers would frequently encounter when testing across multiple translated versions of our product. If our testers were going to use a common set of routines for reading and writing log files, those routines had to be able to handle messages not only in an English codepage, but also in the codepages used by the other 13 languages into which our product was translated.
Multiple operating systems. While we were directly testing OS/2 WARP Server for e-Business, it was essential for us to run tests on other operating systems, such as Windows** and AIX* (Advanced Interactive Executive*) to perform interoperability and compatibility testing with our product. If we wanted our testers to use common reusable routines to perform such tasks as logging, described above, the routines needed to be accessible from all the operating systems we used.
Existing automation components. As we examined the types of components that were continually being recreated by our teams, as well as those that would need to exist to support the types of automation we wanted to put in place (as described in the following subsection), we realized that we would need a substantial base of automation components. Some of these components included process execution, file transfer, synchronization, logging, remote monitoring, resource management, event management, data management, and queuing. Additionally, these components had to be available both locally and in a remote fashion across the network. If the solution did not provide these components, we would have to create them. Therefore, we wanted a solution that provided a significant base of automation components.
Automation. This subsection provides some examples, using the Ogre test suite, to motivate the need for automation. As was mentioned, this test suite was designed to test the LAN Server and base OS/2 products under conditions of considerable load and stress, where load means a sustained level of work and stress means pushing the product beyond defined limits. The test suite consists of a set of individual tests focused on a specific aspect of the product (such as transferring files back and forth between the client and server). These tests are executed in a looping pseudorandom fashion on a set of client systems. The set of client systems is typically large, ranging upwards of 128 systems. The set of servers that are being tested is usually very small, typically no more than three. The test suite executes on the client systems for an extended period of time, typically 24 to 72 hours. The combination of the number and configuration of clients and servers and the amount of run time represents a scenario. If all the clients and servers are still operational after the prescribed amount of time, the scenario is considered to be successful. Multiple scenarios are executed during a given SVT cycle.
Test suite execution. Our existing mechanism for starting or stopping a scenario was to have one or more individuals walk up to each client and start or stop the test suite. Given the situation of 128 clients spread throughout a large laboratory, this exercise is expensive, both in time and human resources. This method also introduces the potential of skipping one or more clients, which can have a significant impact on the scenario (such as not uncovering a defect due to insufficient load or stress). Therefore, we wanted a solution that would allow us to start and stop the scenario from a central “management console.”
Test suite distribution. As new tests were created or existing tests were modified, they needed to be distributed to all the client systems. Our existing mechanism consisted of one or more individuals walking around to each client copying the tests from diskettes. This method was complicated by the fact that the tests did not always exist in the exact same location on each client. Like the previous problem of test suite execution, this mechanism was very wasteful of time and human resources. It also introduced another potential point of failure whereby one or more clients do not receive updated tests, resulting in false errors. Therefore, we wanted a solution that provided a mechanism for distributing our tests to our clients correctly and consistently.
Test suite monitoring. While a scenario was running, we were responsible for continually monitoring it to ensure that no failures had occurred. Our existing mechanism consisted of one or more individuals walking around to each client system to look for errors on the system screen. Such monitoring was partially alleviated by the fact that the tests would emit audible beeps when an error occurred. The beeps generally made it possible to simply walk into the laboratory and “listen” for errors. Unfortunately, we still had to monitor the scenario after standard work hours and on the weekend, which meant having individuals periodically drive into work and walk around the laboratory looking and listening for errors. Again, this method was very wasteful of time and human resources. It was also a negative morale factor, since it was considered “grunt” work. Therefore, we wanted a solution that provided a remote monitoring mechanism so that the status of the scenario could be evaluated from an individual's office or by telneting in from home.
Test suite execution dynamics. The Ogre test suite was already very configurable. An extensive list of properties was defined in a configuration file that was read during test suite initialization (and cached in environment variables for faster access). These properties manipulated many aspects of the scenario, such as which resources were available on which servers, which servers were currently off line, and the ratios defining the frequency with which the servers were accessed relative to one another. This configurability allowed us, for example, to make a one-line change that would prevent the clients from accessing a given server (in case a problem was currently being investigated on it) or increase or decrease the stress one server received in relation to another. However, the only viable way to modify these parameters was to stop and start the entire scenario. As an example, assume that 36 hours into a 72-hour scenario, we found a problem with one of the servers. We could stop the scenario, change the configuration file to make the server unavailable, and then restart the scenario, which allowed us to exercise the remaining servers while the problem was being analyzed. Then, 12 hours later, when a fix for the problem had been created, we needed to bring the newly fixed server back into the mix. In order to do this, we had to stop and start the entire scenario, which effectively negated all of the run time we had accumulated on the other servers at that point. Similar situations arose when we needed to change server stress ratios or other configuration parameters. Therefore, we wanted a solution that would allow us to change configuration information dynamically during the execution of a scenario.
Another long-standing issue with Ogre was that we were only able to execute one instance of the test suite at a time on any given client. It was felt that the ability to execute multiple instances of the test suite on the same client at the same time would allow us to produce equivalent stress with fewer clients.
Test suite resource management. In order to make a connection to a server, the client must specify a drive letter (in the case of a file resource) or a printer port (in the case of a printer resource) through which the resource will be accessed. When running multiple instances of the test suite, race conditions arise surrounding which drive letter or printer port to specify at any given time. Therefore, we wanted a solution that allowed us to manage the drive letter and printer port assignments among multiple instances of the test suite.
Test suite synchronization. Some of our tests have strict, nonchangeable dependencies on being the only process on the system running that particular test. When running multiple instances of the test suite, we needed a way to avoid having multiple instances executing the same test simultaneously. Therefore, we wanted a solution that allowed us to synchronize access to individual tests.
Existing solutions
Because we had two separate problems (reuse and automation), we realized we might need to find two separate solutions. However, we were hoping to find a single solution that would address both problems. Our preferences, in order, were:
A single solution designed to solve both problems
Two separate solutions designed to work together
A solution to reuse, which provided components designed to support automation, from which we could build an automation solution
Two separate, disjoint solutions
In the following subsections, I describe existing solutions that we explored, how they addressed the problems of reuse and automation, and how they related to our solution preferences.
Scripting languages. Scripting languages such as Perl, Python, Tcl, and Java (although Java would not technically be considered a scripting language, since it does require programs to be compiled) are very popular in the programming industry as a whole, as well as within test organizations, since they facilitate a rapid development cycle.1 As programming languages, scripting languages are not intended to directly solve either reuse or automation. Additionally, they are not directly targeted at the test environment, although their generality does not preclude their use in a test environment. Despite these limitations, we felt that given the wide popularity of scripting languages and the almost fanatical devotion of their proponents, we should examine their potential for solving our problems.
Although scripting languages are not a direct solution to reuse or automation, scripting languages do have some general applicability to the problem of reuse. To begin with, they are available on a wide variety of operating systems. They also have large well-established sets of extensions. Although not complete from a test perspective, these extensions would provide a solid base from which to build. Additionally, some languages (notably Tcl and Java) provide support for dealing with multiple codepages.
The benefits of scripting languages would clearly place them in category 3 of our preferences. Unfortunately, these benefits are only available if one is willing to standardize on one language exclusively. As was mentioned earlier, our testers create tests in many different programming languages, and it would have been tremendously difficult to force them to switch to one common programming language. Even if we could have convinced all of the testers on our team, we could never have convinced all the testers in our entire organization (much less those in other divisions, or at other sites), with whom we hoped to share our solution. Therefore, we were unable to rely on scripting languages for our solution.
Test harnesses. A test harness is an application that is used to execute one or more tests on one or more systems. In effect, test harnesses are designed to automate the execution of individually automated tests.
A variety of different test harnesses are available. Each is geared toward a particular type of testing. For example, many typical UNIX** tests are written in shell script or the C language. These tests are generally stand-alone executables that return zero on success and nonzero on error. Harnesses such as the Open Group's Test Environment Toolkit (TET, also known as TETware) are designed to execute these types of tests on one or more systems.2 In contrast, a harness such as Sun's Java Test leverages the underlying Java programming language to create a harness that is geared specifically to tests written in the Java language. It would not be uncommon for a test team to use both of these harnesses. Additionally, it is not uncommon for test teams to create custom harnesses geared toward specialized areas they test, such as I/O subsystems and protocol stacks.
It is clear that test harnesses have direct applicability to the problem of automation. However, as a general rule, test harnesses only solve the execution part of the automation problem. This solution still leaves areas such as test suite distribution, test suite monitoring, and test suite execution dynamics unsolved. Additionally, test harnesses have no direct or general applicability to the problem of reuse. Thus, test harnesses are, at best, only part of the solution to category 4 of our preferences. That having been said, the proximity of test harnesses to the test environment made it likely that one or more test harnesses would play a role in our ultimate solution. However, we still needed to find a solution for reuse and determine which, if any, of the existing test harnesses we would use and extend to fill in the rest of the automation gaps.
CORBA. At a very basic level, CORBA** (Common Object Request Broker Architecture) is a set of industry-wide specifications that define mechanisms that allow applications running on different operating systems, and written in different programming languages, to communicate.3 CORBA also defines a set of higher-level services, sitting on top of this communication layer, that provide functionality deemed beneficial by the programming community at large (such as naming, event, and transaction services). It is important to understand that CORBA itself is not a product; it is a set of specifications. For any given set of operating systems, languages, and services, it is necessary to either find a vendor who has implemented CORBA for that environment, or, much less desirably, implement it oneself.
CORBA is not intended to directly solve the problems of reuse and automation. However, CORBA does have some general applicability to the problem of reuse. First, CORBA is supported on a wide variety of operating systems. Second, there is CORBA support for a wide variety of programming languages. Thus, CORBA solves two of our key reuse problems. In contrast, CORBA has no direct support for multiple codepages. Additionally, the set of available CORBA services is not geared toward a test environment, which is understandable given the general applicability of CORBA to the computer programming industry as a whole.
Given the above, CORBA would clearly fit in category 3 of our preferences, although significant work would be necessary to provide the missing support in terms of multiple codepages and existing automation components. Additionally, as we mentioned above, there is no one company that produces a product called “CORBA.” What this means is that for a complete solution one must frequently obtain products from multiple vendors and attempt to configure them to work together. This attempt has been notoriously difficult in the past,4 and, although the situation is improving, we would rather have avoided this layer of complication. All told, we felt that a CORBA solution was not worth the expense necessary to implement and maintain it.
The design of STAF
Having exhausted other avenues, we decided to create our own solution. We had a two-phased approach to the development of STAF. The first phase addressed the issue of reuse. This phase by itself would give us a solution that fell into category 3 of our solution preferences. The second phase tackled the problem of automation. In this phase we would build on top of the reuse solution and extend it to solve our automation problem. This two-step approach provided a solution that fell into category 1 of our solution preferences. The result of that work was the Software Testing Automation Framework, or STAF.
In the subsections that follow, I present the underlying design ideas surrounding STAF and how they helped provide a reuse solution. A subsequent section will then address how we built and extended this solution to solve the problem of automation.
Services. STAF was designed around the idea of reusable components. In STAF, we call these components services. Each service in STAF exposes a specialized set of functionality, such as logging, to users of STAF and other services. STAF, itself, is fundamentally a daemon process that provides a thin dispatching mechanism that routes incoming requests (from local and remote processes) to these services. STAF has two “flavors” of services, internal and external. Internal services are coded directly into the daemon process and provide the core services, such as data management and synchronization, upon which other services build. External services are accessed via shared libraries that are dynamically loaded by STAF.
This ability to provide services externally from the STAF daemon process allowed us to keep the core of STAF very small, while allowing users to pick and choose which additional pieces they wanted. It minimizes the infrastructure necessary to run STAF. Additionally, the small STAF core makes it easy to provide support on multiple platforms, and also to port STAF to new platforms.
Request-result format. Fundamentally, every STAF request consists of three parameters, all of which are strings. The first parameter is the name of the system to which the request should be sent. This parameter is analyzed by the local STAF daemon to determine whether the request should be handled locally or should be directed to another STAF system. Once the request has made it to the system that will handle it, the second parameter is analyzed to determine which service is being invoked. Finally, the third parameter, which contains data for the request itself, is passed into the request handler of the service to be processed.
After processing the request, the service returns two pieces of data. The first is a numeric return code, which denotes the general result of the request. The second is a string that contains request-specific information. If the request was successful, this information contains the data, if any, which were asked for in the request. If the request was unsuccessful, this information typically contains additional diagnostic information.
By dealing primarily with strings, we have been able to simplify many facets of STAF. First, there is only one primary function used to interface with STAF from any given programming language. This function is known as STAFSubmit(), and its parameters are the three strings described above. Because of the simplicity of this interface, requests look essentially identical across all supported programming languages, which makes using STAF from multiple programming languages much easier. Adding support for a new programming language is also trivial, because only a very small API set must be exposed in the target language. Had we chosen to use custom APIs for each service, the work to support a new programming language would be significant, since we would be faced with providing interfaces to a much, much larger set of APIs.
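To make the shape of this caller-side interface concrete, the following Java sketch models the pattern described above: three string parameters in, a numeric return code and a result string out. The StafClient interface, the StafResult record, and the request strings shown are illustrative stand-ins, not the actual STAF language bindings; only the three-string request and the (return code, result) pair come from the description above.

// Hypothetical caller-side sketch of the three-string request pattern.
public class SubmitSketch {

    record StafResult(int rc, String result) {}

    interface StafClient {
        // where: target system; service: service name; request: request string
        StafResult submit(String where, String service, String request);
    }

    public static void main(String[] args) {
        StafClient staf = (where, service, request) -> {
            // Stand-in for the real IPC to the local STAF daemon.
            System.out.printf("-> %s %s %s%n", where, service, request);
            return new StafResult(0, "");
        };

        // Requests look the same whether the target system is local or remote.
        StafResult local  = staf.submit("local", "ping", "PING");
        StafResult remote = staf.submit("delta", "log",
                "log machine logname ogre level error message \"disk full\"");

        if (remote.rc() != 0) {
            System.err.println("request failed: " + remote.result());
        }
    }
}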
Strings also make it easier to create and interface with external services. The primary interface for communicating with an external service consists of a method to pass the requisite strings in and out of the service. Additionally, by restricting ourselves to strings we are able to provide to services a common set of routines to parse the incoming request strings. Common routines allow service providers to simply define the format of their request strings and pass them to this common parser for validation and data retrieval, which helps ease the creation of reusable components. This leads to benefits in the user space as well, since all service request strings follow a common lexical format, which provides a level of commonality to all services. It also makes it easier to use services when switching from one programming language or operating system to another, because the request strings are identical regardless of the environment. Commonality has the added benefit of hiding the programming language choice of the caller and the service provider from one another.
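The service side can be sketched the same way, under the same caveat that the names, request syntax, and return codes here are illustrative rather than STAF's actual service-provider interface: the service receives the raw request string, runs it through a small shared parsing helper, and returns a numeric return code plus a result string.

// Hypothetical service-side sketch: string in, (return code, string) out.
import java.util.HashMap;
import java.util.Map;

public class ServiceSideSketch {

    record Result(int rc, String result) {}

    // Stand-in for the common parser: first token is the action, the rest are option/value pairs.
    static Map<String, String> parseOptions(String[] tokens) {
        Map<String, String> options = new HashMap<>();
        for (int i = 1; i + 1 < tokens.length; i += 2) {
            options.put(tokens[i].toLowerCase(), tokens[i + 1]);
        }
        return options;
    }

    // Every service exposes the same shape to the daemon, regardless of what it does.
    static Result acceptRequest(String request) {
        String[] tokens = request.trim().split("\\s+");
        if (!tokens[0].equalsIgnoreCase("generate")) {
            return new Result(7, "unknown request: " + tokens[0]);   // rc value is illustrative
        }
        Map<String, String> opts = parseOptions(tokens);
        return new Result(0, "generated event " + opts.get("type") + "/" + opts.get("subtype"));
    }

    public static void main(String[] args) {
        Result r = acceptRequest("generate type Build subtype WebSphere_V4");
        System.out.println(r.rc() + " : " + r.result());   // 0 : generated event Build/WebSphere_V4
    }
}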
As an example of this request flow, consider a STAF request initiated by a REXX program running on machine gamma (running Windows 2000), submitting the request “generate type Build subtype WebSphere_V4” to the event service on machine delta. In step 1, the REXX interpreter passes the request to the REXX API layer of STAF. In step 2, the REXX API layer passes the request to the C API layer. In step 3, the C API layer makes the interprocess communication (IPC) request to the STAF daemon process. At this point the STAF daemon determines that the request is destined for another system, which initiates step 4, a network IPC request to the STAF daemon on machine delta (running AIX Version 4.3.3). The STAF daemon on machine delta determines that the request is bound for the event service. This leads to step 5, where the request is passed to the Java service proxy layer, the layer responsible for communicating directly with the JVM. In step 6, the proxy layer passes the request into the JVM, and in step 7 the JVM invokes the corresponding method on the event service object. Upon receiving the request, in step 8, the event service passes the request string to the common request parser of STAF for validation. At this point the event service would perform the indicated request, and steps 1 through 7 would be reversed as the result was passed back to the REXX program on machine gamma.
There are a number of things to note about this request flow. First, it was quite easy to specify a network-oriented request from the point of view of the REXX program. Second, the machines in question are running different operating systems on different hardware architectures, and neither the REXX program nor the event service need be aware of this difference. Third, neither the REXX program nor the Java-based event service need be concerned with the language the other was using.
The decision to have STAF deal only with strings was the most crucial and beneficial decision we made while designing STAF. It has allowed us to keep STAF simple and flexible at the same time.
Unicode. Because we focus predominantly on strings and were concerned with codepage issues, STAF was designed to use Unicode** internally. When a call to STAFSubmit() is made, the input strings are converted to Unicode. All further processing is carried out in Unicode. Data are only converted out of Unicode when a result is passed back from STAFSubmit(), or if STAF is forced to interact with the operating system or some other entity that does not accept Unicode strings. By processing data in Unicode, we keep the integrity of the data intact. For example, if a system using a Japanese codepage sends a request to log some data containing Japanese codepage characters to a system using an English codepage, the data are initially converted to Unicode (which maintains the integrity of the data) when the STAFSubmit() call is issued. The data are maintained in Unicode until another STAFSubmit() call is issued to retrieve the data. If the same system running the Japanese codepage requests the data, the data will be converted from Unicode back to the Japanese codepage, which preserves the integrity of the data, since the data were originally in the same codepage. The data retrieved will be the same data initially logged even though, for some indeterminate length of time, the data were being stored or maintained on a system using an English codepage. Thus, by using Unicode throughout STAF, we solved our problem of handling multiple codepages.
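The round trip described above can be illustrated with standard Java charsets. This is only an illustration of the principle, not STAF code: bytes produced under a Japanese codepage are decoded to Unicode, held internally as Unicode regardless of where they are stored, and re-encoded only when handed back to that codepage, so nothing is lost in between. Shift_JIS is used here as a stand-in for "a Japanese codepage."

// Illustration of the codepage round trip using standard Java charsets.
import java.nio.charset.Charset;

public class UnicodeRoundTrip {
    public static void main(String[] args) {
        Charset japanese = Charset.forName("Shift_JIS");

        String original = "テスト失敗";                     // data supplied by the Japanese client
        byte[] onTheWire = original.getBytes(japanese);     // what that client's codepage produces

        // Inside the framework everything is kept as Unicode (a Java String),
        // regardless of the codepage of the system that stores it.
        String storedAsUnicode = new String(onTheWire, japanese);

        // When the Japanese client asks for the data back, it is re-encoded
        // into its own codepage, and the integrity of the data is preserved.
        byte[] returned = storedAsUnicode.getBytes(japanese);
        System.out.println(new String(returned, japanese).equals(original)); // true
    }
}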
Available services. In order to solve our automation problems, we needed a set of components on which to build. As we built STAF, we kept this foremost in our minds and ensured that the services we developed included these essential automation components. Here we describe some of the services that STAF provides. We will see these services again later when we examine how they were used to create the solution to our automation problems.
Three core services in STAF are the handle, variable, and queue services. These services provide fundamental capabilities that are common across all services and provide a foundation from which to build. Unsurprisingly, these services expose the capabilities of handles, variables, and queuing in STAF.
Handles are used to identify and encapsulate application data in the STAF environment. When an application wishes to use STAF, it obtains a handle by calling a registration API. The handle returned is tied specifically to the registering application. In general, there is a one-to-many mapping between applications and handles: an application may have more than one handle, but any given handle belongs to a single application. However, STAF does support special “static” handles that can be shared among applications. Each STAF handle has an associated message queue. This queue allows an application to receive data from other applications and services. It also forms the basis for local and network-oriented interprocess communication in STAF. Many services deliver data to an application via its queue. These queues allow applications to work in an event-driven manner similar to the approach used by many windowing systems.
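The handle-and-queue relationship can be sketched as follows. The types here are hypothetical, not the STAF API; the point is simply that each handle owns a message queue, other parties deliver data by posting to it, and the owning application can block on the queue and work in an event-driven fashion.

// Hedged sketch of the handle/queue idea described above.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HandleQueueSketch {

    static class Handle {
        final int id;
        final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        Handle(int id) { this.id = id; }
    }

    public static void main(String[] args) throws InterruptedException {
        Handle handle = new Handle(42);   // what a registration call might hand back

        // Another application or service delivering a message to this handle's queue.
        handle.queue.put("PROCESS END rc=0");

        // The owning application blocks until something arrives, then reacts to it,
        // much like an event loop in a windowing system.
        String message = handle.queue.take();
        System.out.println("handle " + handle.id + " received: " + message);
    }
}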
STAF provides data management facilities through STAF variables. These STAF variables are used by STAF applications in much the same way that variables are used in a programming language. When a STAF request is submitted, any STAF variables in the request are replaced with their values. One of the powerful capabilities of STAF variables is that they can be changed outside of the scope of the running application. This makes it possible to alter the behavior of an application dynamically. For example, an application designed to apply a specific percentage of load on a system might allow the percentage to be specified through an environment variable or as a command line argument. In this case, once the application is running, the only way to change the load percentage is to stop the application and restart it with the altered environment variable or command line argument. Using STAF variables allows the value to be changed without stopping the application. The only change to the application would be to periodically reevaluate the value of the STAF variable. These STAF variables are stored in variable pools. Each STAF handle has a unique variable pool that is specific to that application. There is also a global variable pool that is common across all handles on a given STAF system. This commonality allows default values to be specified in the global variable pool and then overridden on a handle-by-handle basis.
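The two-level lookup can be sketched with ordinary maps. This is a hypothetical model, not STAF's implementation: the handle-specific pool is consulted first and falls back to the global pool, so a system-wide default can be overridden per handle and changed while the application is running.

// Hedged sketch of handle-pool lookup with fallback to a global pool.
import java.util.HashMap;
import java.util.Map;

public class VariablePoolSketch {

    static final Map<String, String> globalPool = new HashMap<>();

    static String resolve(Map<String, String> handlePool, String name) {
        return handlePool.getOrDefault(name, globalPool.get(name));
    }

    public static void main(String[] args) {
        globalPool.put("loadPercent", "50");                     // system-wide default

        Map<String, String> handlePool = new HashMap<>();
        System.out.println(resolve(handlePool, "loadPercent"));  // 50 (global default)

        handlePool.put("loadPercent", "80");                     // override for this handle only,
        System.out.println(resolve(handlePool, "loadPercent"));  // 80, without restarting anything
    }
}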
STAF provides several other services in addition to handle, variable, and queue. STAF provides synchronization facilities through the semaphore and resource pool services. The semaphore service provides named mutual exclusion (mutex) and event semaphores. Compared with native semaphores commonly provided by an operating system, STAF semaphores have two advantages. One, they are available remotely across the network. Two, they are more visible, meaning it is much easier, for example, to determine who owns a mutex semaphore and who is waiting on an event semaphore. The resource pool service provides a means to manage named pools of resources, such as machines, user identifiers, and licenses. In particular, it provides features for managing the content of the pools as well as synchronizing access to the elements in the pools.
STAF provides process execution facilities through the process service. This service allows processes on STAF systems to be started, stopped, and queried. It provides detailed control over the execution of processes including specification of environment variables, the working directory, input/output redirection, and effective user identification. The process service can also, at user request, deliver notifications when processes end. These notifications are delivered via the queuing facilities described earlier.
STAF provides file system facilities through the file system service. Currently, this service provides mechanisms for transferring files and accessing file content. Future versions of STAF will expand the capabilities of this service into file and directory management, such as directory creation and enumeration and file or directory deletion.
STAF provides logging facilities through the log service. At its most basic layer, this service provides time-stamped message logging based on levels, such as “FATAL,” “ERROR,” “WARNING,” and “DEBUG.” A variety of higher-level facilities are built on top of this foundation, including local and centralized logging, log sharing between applications, dynamic level-masking, and maintenance on active logs. The dynamic level-masking is of particular interest. Level-masking refers to the ability of the user to determine which logging levels will be stored in a log file. Messages with logging levels not included in the level-mask will be discarded. The fact that this feature is dynamic means that the level-mask can be changed while an application is running. For example, this ability would allow a user to “switch on” debug messages when a problem is encountered, without needing to stop and restart the application.
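The dynamic level-masking idea can be sketched as follows; the level names come from the text, but the mask representation and the ability to flip it from another thread are illustrative rather than a description of the log service's internals.

// Sketch of dynamic level-masking: only levels present in the current mask are logged,
// and the mask can be changed while the application is running.
import java.util.EnumSet;

public class LevelMaskSketch {

    enum Level { FATAL, ERROR, WARNING, DEBUG }

    // The mask can be replaced at runtime, e.g. by an operator investigating a problem.
    static volatile EnumSet<Level> mask = EnumSet.of(Level.FATAL, Level.ERROR, Level.WARNING);

    static void log(Level level, String message) {
        if (mask.contains(level)) {
            System.out.println(level + ": " + message);
        }
    }

    public static void main(String[] args) {
        log(Level.DEBUG, "dropped: debug is masked off");

        mask = EnumSet.allOf(Level.class);   // "switch on" debug messages mid-run
        log(Level.DEBUG, "now visible");
    }
}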
STAF provides remote monitoring facilities through the monitor service. This service provides a lightweight publish-query mechanism. Applications publish their state, which then allows other applications to remotely query it. The published state is a simple time-stamped string, yet this has proven sufficiently robust for monitoring the progress of typical tests and applications.
STAF provides event-handling facilities through the event service. This service provides standard publish-subscribe semantics. Applications register for specific types and, possibly, subtypes of events. Other applications generate events based on a type, subtype, and a set of properties (which are attribute/value pairs). The events are delivered via the queuing facilities described earlier.
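A minimal publish-subscribe sketch of these semantics follows. The type and subtype values echo the earlier request-flow example; the property shown and the in-memory delivery are illustrative only, since in STAF delivery happens through each handle's message queue.

// Sketch of publish-subscribe with type, subtype, and attribute/value properties.
import java.util.*;

public class EventSketch {

    record Event(String type, String subtype, Map<String, String> properties) {}

    static final Map<String, List<Queue<Event>>> subscribers = new HashMap<>();

    static void register(String type, Queue<Event> queue) {
        subscribers.computeIfAbsent(type, t -> new ArrayList<>()).add(queue);
    }

    static void generate(Event event) {
        for (Queue<Event> q : subscribers.getOrDefault(event.type(), List.of())) {
            q.add(event);   // in STAF terms: delivered via the subscriber's message queue
        }
    }

    public static void main(String[] args) {
        Queue<Event> myQueue = new ArrayDeque<>();
        register("Build", myQueue);

        generate(new Event("Build", "WebSphere_V4", Map.of("level", "20020601")));
        System.out.println(myQueue.poll());
    }
}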
In addition to the services described above, STAF makes it quite easy for groups to develop their own services to meet specific needs. These services can then become part of the set of service components available for use with STAF. The modular service-based nature of the platform provides the infrastructure for evolution and growth.
From reuse to automation
Having addressed reuse, we next focused on automation. Our plan was to build a solution on top of STAF by leveraging the automation components that it provides.
The first area we tackled was the execution of the Ogre test suite. Instead of trying to retrofit an existing test harness onto STAF, we chose to create a new one that was STAF-aware from the ground up. What we came up with was a program called the Generic WorkLoad processor or, in abbreviated form, GenWL (pronounced JEN-wall). This harness allows us to create a text file defining the configuration data for the scenario, the processes to be executed, and the systems on which they should be executed. This text file is called the workload file. Using GenWL, we are able to start or stop the entire workload with a single command from a central management console, which was our desired goal. GenWL also played an important role in solving other aspects of the automation problem, which are discussed below.
Next, we looked to solve the problems associated with executing more than one instance of Ogre on a given system. The two most pressing issues were test suite synchronization and resource management. To handle synchronized access to tests, we relied on the STAF semaphore service, in particular, its mutex semaphore support. This service allowed one instance of the test suite to gain exclusive access to a test and then release control once execution of that test was complete. To manage the drive letters and printer ports, we relied on the resource pool service of STAF. This service allowed us to set up separate pools for the drive letters and printer ports. The service then manages the access to entries within the pool. Thus, when one instance of the test suite requests a drive letter, we can be sure that no other instance of the test suite will obtain that drive letter until the first instance releases control of it back to the resource pool service. With these problems solved, we were able to run multiple instances of Ogre on our systems.
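The shape of what one test-suite instance does can be sketched as follows. This is a model of the coordination pattern only; in the real setup both the drive-letter pool and the mutex are managed by STAF service requests rather than in-process objects, and the pool contents are illustrative.

// Sketch of one Ogre instance: take a drive letter from a shared pool, hold a named
// mutex while running a test that must not run concurrently, then release both.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

public class OgreInstanceSketch {

    // Stand-ins for the resource pool service and the semaphore service.
    static final BlockingQueue<String> driveLetters =
            new ArrayBlockingQueue<>(3, true, java.util.List.of("X:", "Y:", "Z:"));
    static final ReentrantLock exclusiveTestMutex = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        String drive = driveLetters.take();        // no other instance can obtain this letter
        try {
            exclusiveTestMutex.lock();             // only one instance runs this test at a time
            try {
                System.out.println("running exclusive test using drive " + drive);
            } finally {
                exclusiveTestMutex.unlock();
            }
        } finally {
            driveLetters.put(drive);               // return the letter to the pool
        }
    }
}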
While making the synchronization and resource management changes described above, we found ourselves redistributing the test suite more often than usual, so in conjunction with the above changes, we also set out to solve the test suite distribution problem. Here we were able to leverage the file system and variable services of STAF. Using these two services, we wrote a small script that iterated through a list of clients in a file and used the file system service to copy each file. The variable service was used to deal with mapping the abstract destination defined in the copy command to the actual destination on each client. With the list of clients maintained in a file, we were assured the updated test suite was consistently distributed to all the clients.
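The rough shape of that distribution script is shown below. The file names and the copy helper are assumptions for illustration; in the real script the copy is a request to the STAF file system service, and the destination mapping is handled through the variable service.

// Rough shape of the distribution script: read the client list, then push each
// test-suite file to every client.
import java.nio.file.*;
import java.util.List;

public class DistributeSketch {

    // Stand-in for a request to the STAF file system service.
    static void copyToClient(String client, Path file) {
        System.out.println("copy " + file + " -> " + client);
    }

    public static void main(String[] args) throws Exception {
        List<String> clients = Files.readAllLines(Path.of("clients.txt"));
        try (var suiteFiles = Files.list(Path.of("testsuite"))) {
            suiteFiles.forEach(file -> clients.forEach(client -> copyToClient(client, file)));
        }
    }
}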
With the problems of test suite distribution and execution solved, we next addressed the test suite monitoring problem. Here we leveraged the monitor service of STAF. Our test suite published its state to the monitor service every time it entered a subtest or when an error or warning occurred. Given the published information, we next developed a way to view this information using the GenWL execution harness. The workload file read by GenWL defines all the test suite instances; thus it is trivial for GenWL to interact with the monitor service to retrieve the published state for all the test suite instances. GenWL then displays this information on a system-by-system basis. With a single command from our management console, we were able to ascertain the current state of the entire Ogre scenario.
Although GenWL and the monitor service allowed us to determine the state of the scenario at any given point in time, this capability was not sufficient for us to determine what had transpired over extended periods of time (e.g., from one evening until the following morning). With GenWL and the monitor service, we could see the state as we left and when we came in, but we were still unaware as to any problems that had occurred in between.
To solve this problem we simply replaced our existing logging mechanism with calls to the log service of STAF. This change allowed us to use an approach similar to the one used to solve the test suite distribution problem. We created a simple script that iterated over a list of clients in a file and used the facilities of the log service to retrieve all the error and warning messages that had been logged over a given period of time. We were then able to ascertain which, if any, of those errors and warnings were true problems or merely artifacts of temporarily pushing a server beyond its capacity. Remember, Ogre is a load and stress test, so we expect to occasionally push the servers beyond their limits.
Finally, we were left with the problem of execution dynamics. To solve this problem, we leveraged GenWL again. As mentioned above, the workload file contains the configuration information for the scenario. As the workload file is processed, this configuration information is stored on each of the client systems using the STAF variable service. As the test suite executes, it retrieves the configuration information from the variable service. By using the variable service, we were able to update the configuration information dynamically. Thus, if we needed to change the configuration information, such as to reintroduce a server or change server stress ratios, we simply updated the appropriate values in the workload file and directed GenWL to push that value out to all the clients.
Issues
We have received surprisingly few complaints about STAF from our users. The vast majority of user issues concern clarifying the documentation or requesting new features (such as new services or extensions to existing services). We have also found and fixed isolated performance issues. For example, the log service was originally written in REXX, which proved to be unacceptably slow. We have since ported the log service to C++, which significantly improved its performance.
With respect to overall performance, STAF requests do incur some overhead, since each request requires an IPC hop from the requesting process to the STAF daemon, and the user's request string must be parsed (as opposed to dealing directly with raw data). This means STAF would not be appropriate for extremely low-latency requests. To date, we have not encountered this problem.
Benefits
By providing a reusable framework and reusable services, STAF has allowed teams to focus on directly solving their problems instead of inventing infrastructure. This advantage is illustrated with the tools developed for automating Ogre. The test distribution script and the log-querying script were both less than 50 lines of code. The scripts were so small because they were able to depend on the underlying STAF infrastructure and the services it provides. The GenWL program relies on a number of STAF services to perform its tasks. By reusing these services, GenWL is free to concern itself with the fundamental activities of parsing the command line parameters and the workload file. The remainder of the work is handled by STAF and includes setting the configuration information, starting and stopping the processes, and monitoring the test progress. This work is done with only nine commands in the GenWL program. We have found this type of usage to be fairly typical.
If we look at the application of STAF to our automation problem, we see significant savings arise. By overcoming our test suite synchronization and resource management problems, we were able to reduce the required number of client systems by approximately 33 percent, which in the largest case meant a reduction of 48 client systems. This reduction represents a very large savings in the hardware required to run the test suite.
By overcoming our test suite execution and test suite distribution problems, we were able to reduce the time it takes to restart a scenario based on a new build by roughly 50 percent. Our old manual procedure took us approximately eight hours. Our new automated procedure takes us approximately four hours. This difference is a significant reduction in time and is amplified even more when builds are received late in the day, e.g., 4:00 P.M. Because it previously took eight hours to start the scenario, we would typically begin working with the new build at approximately 8:00 A.M. the following morning. Thus the scenario was not actually running until 5:00 P.M. of that following day. However, with a reduction to four hours, someone can stay and have the scenario running by 8:00 P.M. the same night, which is an even more significant cycle-time reduction of 21 hours. In addition, it used to take several people to perform this work. Now one person can perform the work because we can manage everything from a central console. Thus, there are personnel savings as well.
A major benefit of overcoming our test suite monitoring problems was finding a number of defects in the product that would have gone undetected otherwise. Detecting problems before they reach the customer is a very significant source of savings, because problems found by customers are much more costly to fix than those found during testing.5 In addition, our new monitoring capabilities improved morale by removing the “grunt” work of performing periodic monitoring check-ins at night and on the weekend. If a problem was uncovered while monitoring remotely, we were sometimes able to perform remote diagnostics and solve the problem without coming to the site.
Finally, by overcoming our test suite execution dynamics problems, we were able to save time and personnel by reducing the frequency of scenario restarts. This reduction in restarts was yet another morale boosting item, since we no longer felt like we were “twiddling our thumbs” when running the scenario in a configuration that we knew would have to be restarted in mid-run.
Many times our group had contemplated fixing some of the problems in the Ogre test suite. We had drawn up a list of items that we would need to create in order to solve these problems. Evaluating this list in hindsight, we realized that what we actually needed was STAF. Had we addressed our list of items earlier, we would have ended up with a solution that was centered around our particular test suite, as opposed to the general solution, which is STAF. Instead, the reuse philosophy of STAF allowed us to pick up the reusable components it provides and solve our test suite problems.
Conclusion
To improve the efficiency and effectiveness of the testing process, groups need to find ways to improve their reuse and automation. As a solution to help address these issues, we created STAF. It was designed to solve our reuse problems and was then leveraged to solve our automation problems. Using STAF, we have generated considerable savings with respect to the people, time, and hardware necessary to perform testing.
Since its inception, STAF has been adopted by numerous test groups throughout IBM, and it is being used to create a variety of innovative testing solutions. In my organization alone, we have developed a pluggable solution that drives automated testing from build through results collection. When a new build becomes available, the test systems are automatically set up and installed. Then the test suites are executed automatically, and the results are collected for analysis. These types of solutions would be tremendously more difficult, if not impossible, to create without a solution such as STAF from which to build.
Applying Patterns to Software Testing
Why patterns?
Patterns are a way of helping people who design things. They were formalized by the architect Christopher Alexander. When done well, patterns accomplish at least three things:
They provide a vocabulary for problem-solvers. "Hey, you know, we should use a Null Object."
They focus attention on the forces behind a problem. That allows designers to better understand when and why a solution applies.
They encourage iterative thinking. Each solution creates a new context in which new problems can be solved.
Why test patterns?
We believe that testers lack a useful vocabulary, are hampered by rigid "one size fits all" methodologies, and face many problems whose solutions are underdescribed in the literature. Patterns can help with all of those things.
Moreover, the community of pattern writers is a healthy one that regularly spawns new and useful ideas. We testers should link up with it, and we might find its style of work useful as we look for new ideas.
What happens at the workshops?
We read (or "workshop") patterns to help their authors better express and understand them. This is an essential community-building activity.
We write patterns, individually or in pairs.
We talk about patterns, and other things.
Web test tools
Log Analysis Tools
HTTPD Log Analyzers list - Includes categories for Access Analyzers, Agent Analyzers, Referrer Analyzers, Error Analyzers, and Other Log Analyzers. Most extensive log analysis tool listing on the net. Includes a listing of other log analyzer lists. The access analyzers category alone lists more than 100 tools, with short descriptions of each, organized by platform.
Other Web Test Tools
LISA for Web Services/SOAP - Web services/SOAP test tool from iTKO, Inc. No-code SOAP/XML testing and WSDL exploration and test maintenance; supports active sessions, SSL, authentication and magic strings. Runs on any client and supports Java, .NET, and any other SOAP-compliant web services.
Parasoft SOAtest - Scriptless web services test tool from Parasoft. Automatic test creation from WSDL, WSIL, UDDI and HTTP Traffic. Capabilities include WSDL validation, load and performance testing; graphically model and test complex scenarios. Automatically creates security penetration tests for SQL injections, XPath injections, parameter fuzzing, XML bombs, and external entities. Data-driven testing through data sources such as Excel, CSV, DB queries, etc. Support for JMS; MIME attachment support.
Charles - An HTTP proxy/monitor/Reverse Proxy that enables viewing all HTTP traffic between browser and the Internet, including requests, responses and HTTP headers (which contain the cookies and caching information). Capabilities include HTTP/SSL and variable modem speed simulation. Useful for XML development in web browsers, such as AJAX (Asynchronous Javascript and XML) and XMLHTTP, as it enables viewing of actual XML between the client and the server. Can autoconfigure browser's proxy settings on MSIE, Firefox, Safari. Java application from XK72 Ltd.
Paessler Site Inspector - A web browser that combines MSIE and Mozilla/Gecko into one program; its Analyzing Browser allows switching between the two browser engines with the click of a mouse for comparison. Freeware.
CookiePie Firefox Extension - Firefox extension from Sebastian Wain enabling maintenance of separate cookie storage in different tabs and windows. For example, developers working on web software supporting multiple users or profiles can use CookiePie to simultaneously test their software with each user without needing to open a different browser.
HowsMyPage.com - Web site usability/review service allows web sites to receive free reviews of their web pages, written by other web developers. Determine public reception of a web project and get informed opinions on how to improve web site. Works by asking the user to submit the URL of their page, then to review someone else’s page using a structured review form.
Broken Link Preventer - Link checker that reports on broken links, reports statistics on user attempts to access broken links, and enables broken link prevention. Runs on server and constantly monitors site links.
WebUseCase - A simple browser designed only for test simulation, built on top of JUseCase and HtmlUnit. Provides a use-case recorder which can provide a 'tester experience'. Test creation involves associating GUI events with 'use case commands' created to describe what is intended in terms of the domain, utilizing the 'title' attribute of appropriate HTML tags.
HtmlFixture - Freeware tool to exercise and test web pages in conjunction with FitNesse. It permits making assertions about the structure of a page and navigating between pages. Can run JavaScript, submit forms, "click" links, etc. Similar to HtmlUnit, but without requiring Java programming.
JsUnit - An open-source unit testing framework for client-side JavaScript in the tradition of the XUnit frameworks.
WebPerformance Analyzer - Web development analysis tool from WebPerformance Inc. enables measurement, analysis, and tracking of web page performance during the design and development process. Capture/record complex web pages while browsing, viewing response times and sizes for all web pages and their contents. Examine request and response headers, cookies, errors and content; view pages in an integrated browser. SSL support; playback capabilities; low bandwidth simulation; specify performance requirements for flagging of slow pages. Standalone or Eclipse plugin versions.
Eclipse TPTP Testing Tools Project - TPTP (Test & Performance Tools Platform) is a subproject of Eclipse, an open platform for tool integration. TPTP provides frameworks for building testing tools by extending the TPTP Platform. The framework contains testing editors, deployment and execution of tests, execution environments and associated execution history analysis and reporting. The project also includes exemplary tools: a JUnit-based component testing tool, a Web application performance testing tool, and a manual testing tool. The project supports the OMG UML2 Test Profile.
Test Architect - Keyword-driven test automation tool from LogiGear helps increase test coverage. Built-in playback support for web-based applications and other platforms.
Networking and Server Test Utilities - Small collection of web server and other test utilities.
SWExplorerAutomation - Web tool from Alex Furman creates an automation API for any Web application which uses HTML and DHTML and works with MSIE. The Web application becomes programmatically accessible from any .NET language. The SWExplorerAutomation API provides access to Web application controls and content. The API is generated using SWExplorerAutomation Visual Designer, which helps create programmable objects from Web page content. Features include script recording and VB/C# code generation. Free and paid versions. Requires MSIE and Win 2000 or XP.
Morae - Usability test tool for web sites and software, from TechSmith Corp. for automated recording, analyzing and sharing of usability data. Consists of 3 components. A Recorder records and synchronizes video and data, creating a digital record of system activity and user interaction. A Remote Viewer enables geographically dispersed observers to watch usability tests from any location; it displays test user's computer screen along with a picture-in-picture window displaying the test participant's face and audio; Remote Viewer observers can set markers and add text notes. The Manager component includes integrated editing functionality for assembly of important video clips to share with stakeholders.
AutoTestFlash - Freeware tool by Tiago Simoes for recording and playing back UI Tests in flash applications. Source code also available.
Repro - Manual testing 'helper' tool that records desktop video, system operations in 7 different categories, system resource usage, and system configuration information. Allows user to save and review relevant information for bug reports, and compress the result into a very small file to replay, upload to a bug tracking system, and share with others. Instruments the target application in memory at runtime, so no changes are required to the application under test. For Windows.
URL2image.com - Service from HREF Tools to check web page appearance in different browser/OS combinations. For anyone interested in CSS, web standards, and elastic design; can specify the screen width(s), font magnification(s) and page position(s) for the proofs. Enter URL and receive back report with screenshots taken in real time on real hardware.
TestGen - Free open-source web test data generation program that allows developers to quickly generate test data for their web-services before publicly or internally releasing the web service for production.
EngineViewer and SiteTimer - Free basic services: EngineViewer - reports on how a search engine may view a webpage, from how it breaks down the HTML, to which links it extracts, how it interprets page's robot exclusion rules and more. SiteTimer service - Find out how long it takes various connection types to get a page, check all the graphical links to ensure they're correct, examine server's HTTP headers, more.
Fiddler - An HTTP debugging tool by Eric Lawrence. Acts as an HTTP proxy running on port 8888 of the local PC. Any application which accepts an HTTP proxy can be configured to run through Fiddler. Logs all HTTP traffic between the computer and the Internet, and allows inspecting the HTTP data, setting breakpoints, and "fiddling" with incoming or outgoing data. Designed to be much simpler than using NetMon or Achilles, and includes a simple but powerful JScript.NET event-based scripting subsystem. Free, for Windows.
FREEping - Free ping software utility from Tools4ever which will ping all your Windows-based servers (or any other IP address) in freely-definable intervals. Will send a popup when one of the servers stops responding.
IP Traffic Test and Measure - Network traffic simulation and test tool from Omnicor Corp. can generate TCP/UDP connections using different IP addresses; data creation or capture and replay; manage and monitor throughput, loss, and delay.
VisitorVille - Site traffic monitoring tool from World Market Watch Inc. that depicts website visitors as animated characters in a virtual village; users can watch their web traffic as if they're watching a movie.
Sandra - 'System ANalyser, Diagnostic and Reporting Assistant' utility from SiSoftware. Provides large variety of information about a Windows system's hardware and software. Includes CPU, mainboard, drives, ports, processes, modules, services, device drivers, ODBC sources, memory details, environment settings, system file listings, and much more. Provides performance enhancing tips, tune-up wizard, file system and memory bandwidth benchmarking, more. Reporting via save/print/fax/email in text, html, XML, etc. Free, Professional, and other versions available in multiple languages.
Path Application Manager - Application Monitoring and management tool from Winmoore, Inc. Uses pattern recognition technology to peer deep inside customized or COTS applications, analogous to running an MRI scan. Enables enhancement of QA, testing, and troubleshooting with code coverage capabilities.
RAMP - Section 508 and W3C Accessibility Guidelines tool from Deque Systems that automates analysis and remediation of non-compliant web functionality.
Browser Cam - Service for web developers and testers that creates screen captures of web pages loaded in any browser, any version, any operating system. Allows viewing of web page appearance on Windows, Linux, Macintosh, in most versions of every browser ever released.
Dummynet - Flexible tool developed by Luigi Rizzo, originally designed for testing networking protocols, can be used in testing to simulate queue and bandwidth limitations, delays, packet losses, and multipath effects. Can be used on user's workstations, or on FreeBSD machines acting as routers or bridges.
HTTP Interceptor - A real-time HTTP protocol analysis and troubleshooting tool from AllHTTP.com. View all headers and data that travel between your browser and the server. Split-screen display and dual logs for request and response data. Interceptor also allows changing of select request headers on-the-fly, such as "Referrer" and "User Agent".
SpySmith - Simple but powerful diagnostic tool from Quality Forge; especially useful when testing web sites and web-based applications. It allows the user to peek inside I.E. Browser-based Documents (including those without a 'view source' command) to extract precise information about the DOM elements in an HTML source. SpySmith can also spy on Windows objects. For Windows. Free 90-day trial.
Co-Advisor - Tool from The Measurement Factory for testing quality of protocol implementations. Co-Advisor can test for protocol compatibility, compliance, robustness, security, and other quality factors. Co-Advisor's current focus is on HTTP intermediaries such as firewalls, filters, caching proxies, and XML switches. Other info: runs on FreeBSD packages, Linux RPMs, Windows (on-demand); available as on-line service, binaries, or source code.
PocketSOAP - Packet-capture tool by Simon Fell, with GUI; captures and displays packet data between local client and specified web server. Can log captures to disk. For Windows; binaries and source available; freeware. Also available are PocketXML-RPC and PocketHTTP.
TcpTrace - Tool by Simon Fell acts as a relay between client and server for monitoring packet data. Works with all text-based IP protocols. For Windows; freeware.
ProxyTrace - Tool by Simon Fell acts as a proxy server to allow tracing of HTTP data; can be used by setting browser to use it as a proxy server and then can monitor all traffic to and from browser. Freeware.
tcptrace - Tool written by Shawn Ostermann for analysis of TCP dumpfiles, such as those produced by tcpdump, snoop, etherpeek, HP Net Metrix, or WinDump. Can produce various types of output with info on each connection seen such as elapsed time, bytes, and segments sent and received, retransmissions, round trip times, window advertisements, throughput, and various graphs. Available for various UNIX flavors, for Windows, and as source code; freeware.
MITS.Comm - Tool from Omsphere LLC for simulating virtually any software interface (internal or external). Allows testing without pitfalls associated with live connections to other systems (TCP/IP, Ethernet, FTP, etc). Allows developers to test down to the unit level by simulating the internal software interfaces (message queues, mailboxes, etc.) Tool can learn what request/response scenarios are being tested for future tests and can work with any protocol, any message definitions, and any network. Also available: MITS.GUI
XML Conformance Test Suite - XML conformance test suites from W3C and NIST; contains over 2000 test files and an associated test report (also in XML). The test report contains background information on conformance testing for XML as well as test descriptions for each of the test files. This is a set of metrics for determining conformance to the listed W3C XML Recommendation.
Certify - Test automation management tool from WorkSoft, Inc. For managing and developing test cases and scripts, and generating test scripts. For automated testing of Web, client/server, and mainframe applications. Runs on Windows platforms.
HiSoftware AccVerify - Tool for testing site Accessibility & Usability, Searchability, Privacy and Intellectual Property policy verification, Overall Site Quality, Custom Checks and Test Suites to meet organization's standards. Can crawl a site and report errors; can also programmatically fix most common errors found. Runs on Windows.
HiSoftware Web Site Monitor - Tool allows you to monitor your server and send alerts; allows monitoring web sites for changes or misuse of your intellectual property in metadata or in the presented document; link validation.
Web Optimizer - Web page optimizing tool from Visionary Technologies intelligently compresses web pages to accelerate web sites without changing site's appearance. Removes unnecessary information in HTML, XML, XHTML, CSS, and Javascript and includes GIF and JPEG optimizer techniques.
HTML2TXT - Conversion utility that converts HTML as rendered in MS Internet Explorer into ASCII text while accurately preserving the layout of the text. Included with software are examples of using the control from within Visual Basic, Visual C++, and HTML.
Team Remote Debugger - Debugging tool from Spline Technologies allows tracing of any number of code units of any kind (ASP, MTS, T-SQL, COM+, ActiveX Exe, DLL, COM, Thread, CFML), written in any language (ASP, VB, VC++, Delphi, T-SQL, VJ, CFML), residing on multiple shared and dedicated servers at the same time, without ever attaching to a process. Remote code can pass messages and dialogs directly to your local machine via the Team Remote Debugger component, and developers can then debug their respective code independently of one another, whether the code units reside on the same server, on different servers, or on any combination thereof.
Datatect - Test data generator from Banner Software generates data to a flat file or ODBC-compliant database; includes capabilities such as scripting support that allows user to write VBScripts that modify data to create XML output, data generation interface to Segue SilkTest, capability to read in existing database table structures to aid in data generation, wide variety of data types and capabilities for custom data types. For Windows.
Hypertrak - Suite of software protocol analyzers from Triometric accurately calculates end-to-end download speeds for each transaction, not just samples; produces a range of configurable reports that break down info into network and server speeds, errors, comparison to SLAs, performance for each server, client, URL, time period, etc. Runs on Solaris or Linux.
WebBug - Debugging tool from Aman Software for monitoring HTTP protocol sends and receives; handles HTTP 0.9/1.0/1.1; allows for entry of custom headers. Freeware.
WebMetrics - Web usability testing and evaluation tool suite from U.S. Govt. NIST. Source code available. For UNIX, Windows.
MRTG - Multi Router Traffic Grapher - free tool by Tobi Oetiker utilizing SNMP to monitor traffic loads on network links; generates reports as web pages with GIF graphics on inbound and outbound traffic. For UNIX, Windows.