OpenSTA (Open System Testing Architecture)

Performance Testing:

Performance testing consists of one or more Tests designed to investigate the efficiency of Web Application Environments (WAEs). It is used to identify weaknesses or limitations of the target WAE using a series of stress Tests or load Tests.

Features Of OpenSTA (Open System Testing Architecture)

Product: OpenSTA (developed by CYRANO)

Version: 1.4.1

Latest Version: 1.4.2

Purpose: Performance Testing

Kinds of Application: Web-based and distributed applications that support the HTTP protocol

Language: Script Control Language (SCL)

Architecture: CORBA Architecture (Common Object Request Broker Architecture)

OpenSTA Architecture:

OpenSTA supplies a distributed software testing architecture based on CORBA, which enables you to create and then run Tests across a network.

Components Of OpenSTA:

OpenSTA is built on three components:

• Name Server

• Commander

• Web Relay Daemon

Name Server:

The OpenSTA Name Server Configuration utility is the component that allows you to control your distributed Test environment, including the Hosts on which Virtual Users run.

Commander:

Commander is the Graphical User Interface that runs within the OpenSTA Architecture and functions as the front end for all Test development activity.

Web Relay Daemon:

The Web Relay Daemon is used to map the machines that need to connect to one another in an OpenSTA architecture, including Web-based machines; the relay runs in real time during a Test-run.

Repository Path:

The Repository Path specifies the folder where OpenSTA stores the files it uses for testing. When the Repository is created, the files required for testing are generated in it automatically.

Steps:

1. In Commander, click Tools > Repository Path.

2. Browse to the folder you want to use as the Repository.

When the Repository is created, three folders are generated:

• Collectors
• Scripts
• Tests

Scripts:

Scripts form the content of an HTTP/S performance Test using OpenSTA. After you have planned a Test the next step is to develop its content by creating the Scripts you need.

Steps:

1. Right-click Scripts > New Script > HTTP.

2. Rename the Script.

3. Double-click the Script to open it in Script Modeler.

4. Click Record, specify the URL, then navigate through the hyperlinks you want to capture.

5. Select the primary URL, then select URL Details to get information about the URL.

6. Save the Script.

7. Press F5.

The Environment Section

The Environment section is always the first part of a Script. It is introduced by the mandatory Environment keyword and is preceded by comments written by the Gateway, which note the browser used and the creation date.

The Definitions Section

The Definitions section follows the Environment section and is introduced by the mandatory Definitions keyword. It contains all the definitions used in the Script, including definitions of variables and constants, as well as declarations of timers and file definitions. It also includes the GLOBAL_VARIABLES.INC and RESPONSE_CODES.INC files.

The Code Section

The Code section follows the Definitions section and is introduced by the mandatory Code keyword. It contains commands that represent the Web-activity you have recorded and define the Script’s behavior.
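
Putting the three sections together, a newly recorded HTTP Script has roughly the shape sketched below. This is an illustration only: the URL is a placeholder, and the exact comments, declarations and request options written by the Gateway vary with the browser and OpenSTA version used (real recorded request lines normally carry HEADER and WITH clauses that are omitted here).

!Browser: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
!Date: 01-Jan-2005

Environment
    Description ""
    Mode        HTTP
    Wait        UNIT MILLISECONDS

Definitions
    ! Standard include files referenced by recorded Scripts
    Include     "RESPONSE_CODES.INC"
    Include     "GLOBAL_VARIABLES.INC"

    CHARACTER*512 USER_AGENT
    Integer       USE_PAGE_TIMERS

Code
    Entry[USER_AGENT, USE_PAGE_TIMERS]

    ! Recorded request against a placeholder home page (simplified)
    PRIMARY GET URI "http://www.example.com/ HTTP/1.0" ON 1

    ! Think time recorded between user actions, in milliseconds
    WAIT 2000

    Exit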

Query Results Pane

• HTML: Shows a browser view of the HTML document that has been retrieved.
• Structure: Shows the basic elements of the page in a collapsing tree view.
• DOM: Shows the page structure in the Document Object Model, as a collapsing tree view.
• Server Header: Shows the HTTP response headers that the Web server returned to the browser.
• Client Header: Shows the HTTP request headers provided by the browser for the Web server.
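
The Query Results pane shows this information at record time. If the same information is needed while a Script replays, for example to verify a page, SCL can copy the returned document and headers into Script variables. A minimal sketch, assuming connection ID 1 is the one the request was made on and that the declared variables are large enough for the response (the names and URL are placeholders):

Definitions
    CHARACTER*50000 RESPONSE_BODY
    CHARACTER*2048  RESPONSE_HEADER

Code
    GET URI "http://www.example.com/page HTTP/1.0" ON 1

    ! Copy the HTML document and the server headers into variables
    LOAD RESPONSE_INFO BODY   ON 1 INTO RESPONSE_BODY
    LOAD RESPONSE_INFO HEADER ON 1 INTO RESPONSE_HEADER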

Collectors:

A Collector is a set of user-defined data collection queries which determine the type of performance data recording carried out from one or more Host computers or devices during a Test-run.
There are two types of Collectors:

• NT Performance Collectors are used for collecting performance data from Hosts running Windows NT or Windows 2000.

• SNMP Collectors are used for collecting SNMP data from Hosts and other devices running an SNMP agent or proxy SNMP agent.

Tests:

A Test is a collection of Tasks assigned together to Virtual Users; a Task is either a Script or a Collector. A Test can contain a maximum of 200 Task Groups, and each Task Group can contain a maximum of 200 Tasks.

Steps:

1. In the Commander window, right-click Tests > New Test > Tests.
2. Double-click the new Test to open it.
3. Drag the Script into the Task column of a Task Group.
4. Set the Test environment:

4.1 In the Start tab, set Start Task Group to Immediately and Stop Task Group to On Completion.

4.2 In the VUs tab, set the total number of Virtual Users to 10.

4.3 In the Task 1 tab, set the number of times each user runs the Script to 3.

Configuration: This is the default view when you open a Test and the workspace used to develop a Test.

Monitoring: Use this tab to monitor the progress of a Test-run.

Results: Use this tab to view the results collected during Test-runs in graph and table format.

In the Start Task Group section of the Properties Window

Scheduled: The Task Group starts after the number of days and at the time you set.

Immediately: The Task Group starts when the Test is started.

Delayed: The Task Group starts after the time period you set.

In the Stop Task Group section of the Properties Window

Manually: The Task Group will run continuously until you click the Stop button.
After fixed time: The Task Group is stopped after a fixed period of time.
On Completion: The Script-based Task Group is stopped after completing a number of iterations.

Analyse The Test Results:

Test Configuration

The Test Configuration display option consists of a summary of data collected during a Test-run. It provides data relating to the Task Groups, Scripts, Hosts and Virtual Users that comprised the Test-run.

Test Audit Log

The Test Audit log contains a list of significant events that have occurred during a Test-run. These include the times and details of Test initiation and completion, errors that may have occurred and Virtual User details.

Test Summary Snapshots

The Test Summary Snapshots option displays a variety of Test summary data captured during a Test-run

• TimeStamp: Gives the time of the Task execution.

• Executer Name: Provides the IP address of the machine on which the test executes.

• Avg Connection Time: Shows the average length of time for a TCP connection.

• Task Group ID: Shows the ID corresponding to the Task Group.

• Completed Iterations: Shows the number of times a task has been executed.

• Run Time: Indicates the total execution time of the Task.

• Total Users: Gives the total number of users.

• HTTP Requests: Shows the total number of HTTP requests within the Task.

• HTTP Errors: Indicates the number of 4XX and 5XX error codes returned by the Web server after the HTTP request has been sent. These error codes adhere to the World Wide Web Consortium (W3C) standards.

• Bytes In: Gives the number of bytes received for the HTTP request results.

• Bytes Out: Shows the number of bytes sent for the HTTP request.

• Min Request Latency: Indicates the minimum length of time elapsed in milliseconds between sending an HTTP request and receiving the results.

• Max Request Latency: Shows the maximum length of time elapsed in milliseconds between sending an HTTP request and receiving the results.

• Average Request Latency: Gives the average length of time elapsed in milliseconds between sending an HTTP request and receiving the results.

• Task 1(VUs): Shows the number of virtual users for a Task.

• Task 1(Iterations): Gives the number of iterations for a Task.

• Task 1(Period): Shows the duration of a Task.

Note: If your Task Group consists of multiple Tasks, extra columns corresponding to the respective Task numbers are included in the Test Summary Snapshots table.

Test Report Log

The Test Report log is a sequential text file that is used to record information about a single Test-run. Usually, a single record is written to the Report log whenever a Test case passes or fails.

HTTP Data List

The HTTP Data List stores details of the HTTP requests issued by the Scripts included in a Test when it is run. This data includes the response times and codes for all the HTTP requests issued.

HTTP Monitored Bytes / Second v Elapsed Time

This graph shows the total number of bytes per second returned during the Test-run.

HTTP Response Time (Average per Second) v Number of Responses Graph

This graph displays the average response time for requests, grouped by the number of requests per second, during a Test-run.

HTTP Responses v Elapsed Time Graph

This graph displays the total number of HTTP responses per second during the Test-run.

HTTP Response Time v Elapsed Time Graph

This graph displays the average response time per second of all the requests issued during the Test-run.

HTTP Errors v Elapsed Time Graph

This graph displays a cumulative count of the number of HTTP server errors returned during the Test-run.

HTTP Active Users v Elapsed Time Graph

This graph displays the total number of active Virtual Users, sampled at fixed intervals, during a Test-run.

Timer List
The Timer List file gives details of the Timers recorded during a Test-run.

Timer Values v Active Users Graph
This graph is used to display the effect on performance as measured by timers, as the number of Virtual Users varies.

Timer Values v Elapsed Time Graph
This graph is used to display the average timer values per second.

Ramp Up:

Ramp up is used to increase the load on the server gradually, so that the performance of the application server can be tested as the load approaches its peak.
• Interval between batches: specifies the period of time, in seconds, between each ramp-up period.
• Number of Virtual Users per batch: specifies how many Virtual Users start during the batch ramp-up time.
• Batch ramp up time (seconds): specifies the period during which the Virtual Users you have assigned to a batch start the Task Group.

Steps:

Total number of Virtual Users – 10

Introduce Virtual Users in batches – check it

Interval between batches – 5

Number of Virtual Users per batch – 2

Batch ramp up time – 5
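
As a rough worked example of these settings: 10 Virtual Users introduced 2 per batch gives 5 batches. Each batch ramps up over 5 seconds, and batches are separated by a 5-second interval, so the full load of 10 Virtual Users is reached roughly 25 seconds into the Test-run (or nearer 45 seconds if the interval is counted from the end of each ramp-up period rather than from its start).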

Timer:

Start timer:

This command switches on the named stopwatch timer and writes a 'start timer' record to the statistics log.

End Timer:

This command switches off the named stopwatch timer and writes an 'end timer' record to the statistics log.

In the Definitions section of the Script:

Timer T1

In the Code section:

Start Timer T1
! ... the recorded commands to be timed ...
End Timer T1
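
As a fuller sketch, a Timer is usually wrapped around the recorded requests for the page you want to measure. The timer name and URL below are placeholders, and the request line is a simplified version of what the Gateway actually records:

Definitions
    Timer T_LOGIN_PAGE

Code
    Start Timer T_LOGIN_PAGE

    ! Simplified recorded request for the page being timed
    PRIMARY GET URI "http://www.example.com/login HTTP/1.0" ON 1

    End Timer T_LOGIN_PAGE

Each completed Start/End pair is written to the statistics log, and these values are what the Timer List and the Timer graphs in the Results tab report.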

Parametrisation:

It is used to pass multiple values to a specified request, so that each iteration or Virtual User can submit a different value.

Acquire Mutex "M"
Next A
Set B = A
Log B
Report B
Release Mutex "M"

Here the Mutex "M" ensures that only one Virtual User at a time advances the value list with Next A. The value held in B is then spliced into the recorded URL using string concatenation, i.e. "..." + B + "...".

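A fuller sketch of the same idea, assuming the variable A is declared in the Definitions section with a list of values to cycle through (the variable names, the values, the Mutex name M and the URL are all placeholders, and the exact value-list syntax may vary with the SCL version):

Definitions
    CHARACTER*20 A ( "user1", "user2", "user3" )
    CHARACTER*20 B

Code
    ! Advance to the next value under a Mutex so that concurrent
    ! Virtual Users do not read the list at the same time
    Acquire Mutex "M"
    Next A
    Set B = A
    Release Mutex "M"

    Log B
    Report B

    ! Substitute the value into the recorded request
    GET URI "http://www.example.com/login?user=" + B + " HTTP/1.0" ON 1
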
Regression Testing:

It is the retesting process in which all the modules present in the application are tested again.
In OpenSTA, we perform regression testing using the Call Script command.

Syntax:

Call Script "name of the script to be called"

E.g.:

Call Script “S1”
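
For example, a driver Script for a regression pass can simply call each functional Script in turn. The Script names below are placeholders for Scripts that already exist in the Repository:

Code
    ! Re-run each recorded module in sequence
    Call Script "LOGIN"
    Call Script "SEARCH"
    Call Script "CHECKOUT"
    Call Script "LOGOUT"
    Exit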

Single Stepping:

Single stepping is a debugging feature used to study the replay of Script-based Task Groups included in an HTTP/S load Test.

Use it to check your HTTP/S load Tests and to help resolve errors that may occur during a Test-run.

Steps:
1. Create a Task Group.

2. In the Test pane (Configuration tab), right-click and select Single Step Task Group.

3. Check how the tabs change.

4. You will now have Monitoring and Results tabs.

5. Select whether to single step using a single Virtual User or all the Virtual Users you configured in the Configuration tab when you created the Task Group.
