Sep 29, 2014

SQL Functions

SQL functions are similar to SQL operators in that both manipulate data items and both return a result. SQL functions differ from SQL operators in the format in which they appear with their arguments. The SQL function format enables functions to operate with zero, one, or more arguments.
function(argument1, argument2, ...) alias

SQL functions are used exclusively with SQL commands within SQL statements. There are two general types of SQL functions: single row (or scalar) functions and aggregate functions. These two types differ in the number of database rows on which they act. A single row function returns a value based on a single row in a query, whereas an aggregate function returns a value based on all the rows in a query.

The SQL arithmetic functions are:

ABS(n): Returns the absolute value of the number passed as the argument.
CEIL(n): Rounds any positive or negative decimal value up to the next greatest integer.
FLOOR(n): Rounds any positive or negative decimal value down to the next least integer.
EXP(n): Returns e raised to the n-th power (n is the numeric expression), where e is the base of the natural logarithm and is approximately 2.71828183.
LN(n): Returns the natural logarithm of n, where n is greater than 0.
MOD(m, n): Returns the remainder of m divided by n.
POWER(m, n): Returns the value of m raised to the power n, where both numbers are passed as arguments.
SQRT(n): Returns the square root of the given argument.
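These arithmetic functions behave the same way as their counterparts in most programming languages. A quick sketch of what each one computes, using Python's math module as a stand-in (illustrative only; the exact SQL spelling varies by database):

```python
import math

# Python equivalents of the SQL arithmetic functions above.
print(abs(-17.25))        # ABS(-17.25)  -> 17.25
print(math.ceil(-15.7))   # CEIL(-15.7)  -> -15 (up to the next greatest integer)
print(math.floor(-15.7))  # FLOOR(-15.7) -> -16 (down to the next least integer)
print(math.exp(1))        # EXP(1)       -> e, approximately 2.71828183
print(math.log(math.e))   # LN(e)        -> 1.0
print(11 % 4)             # MOD(11, 4)   -> 3
print(3 ** 2)             # POWER(3, 2)  -> 9
print(math.sqrt(25))      # SQRT(25)     -> 5.0
```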

SQL Character Functions:

  Character functions that return character values return values of the same datatype as the input argument. The length of the value returned by the function is limited by the maximum length of the datatype returned.
•    For functions that return CHAR or VARCHAR2, if the length of the return value exceeds the limit, then Oracle Database truncates it and returns the result without an error message.
•    For functions that return CLOB values, if the length of the return values exceeds the limit, then Oracle raises an error and returns no data.
Returns the character with the binary equivalent to n in the database character set.
CHR (n)
SELECT CHR(68)||CHR(79)||CHR(71) "Dog" FROM DUAL;
Returns the following result:
DOG
  Returns char1 concatenated with char2, where char1 and char2 are string arguments. This function is equivalent to the concatenation operator (||).
CONCAT(char1, char2)
This example uses nesting to concatenate three character strings:
SELECT CONCAT( CONCAT(ename, ' is a '), job) "Job"
FROM emp
WHERE empno = 7900;


  Returns char, with the first letter of each word in uppercase and all other letters in lowercase. Words are delimited by white space or characters that are not alphanumeric.
INITCAP(char)
SELECT INITCAP('the soap') "Capitals" FROM DUAL;
Returns the following result:
The Soap


Returns the string argument char, with all its letters in lowercase. The return value has the same datatype as char, either CHAR or VARCHAR2.
LOWER(char)
ODBC Function
{fn LCASE (char)}
   Returns char1, left-padded to length n with the sequence of characters in char2; char2 defaults to a single blank. If char1 is longer than n, this function returns the portion of char1 that fits in n.
The argument n is the total length of the return value as it is displayed on your terminal screen. In most character sets, this is also the number of characters in the return value. However, in some multi-byte character sets, the display length of a character string can differ from the number of characters in the string.
LPAD(char1,n [,char2])
SELECT LPAD('Page1',15,'*.') "LPAD example" FROM DUAL;

Returns the following result:
LPAD example
---------------
*.*.*.*.*.Page1
Returns the string argument char, with its left-most characters removed up to the first character that is not in the string argument set, which defaults to ' ' (a single space).
LTRIM(char [, set])
ODBC Function
{fn LTRIM (char)}      (trims leading blanks)

   Returns char1, right-padded to length n with char2 replicated as many times as necessary; char2 defaults to a single blank. If char1 is longer than n, this function returns the portion of char1 that fits in n.
The argument n is the total length of the return value as it is displayed on your terminal screen. In most character sets, this is also the number of characters in the return value. However, in some multi-byte character sets, the display length of a character string can differ from the number of characters in the string.
RPAD(char1,n [,char2 ])
SELECT RPAD(ename,12,'ab') "RPAD example"
FROM emp
WHERE ename = 'TURNER';

Returns the following result:
RPAD example
------------
TURNERababab
Returns the string argument char, with its right-most characters removed following the last character that is not in the string argument set, which defaults to ' ' (a single space).
RTRIM(char [,set])
Returns a portion of the string argument char, beginning with the character at position m and n characters long.
SUBSTR(char, m [, n ])
If m is positive, SUBSTR counts from the beginning of char to find the first character. If m is negative, SUBSTR counts backwards from the end of char. The value m cannot be 0. If n is omitted, SUBSTR returns all characters to the end of char. The value n cannot be less than 1.
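The positive/negative m rules are easy to mirror in ordinary code. A small Python sketch of SUBSTR's indexing semantics (the helper name substr is my own, not part of any SQL library):

```python
def substr(char, m, n=None):
    # A sketch of Oracle-style SUBSTR(char, m [, n]) indexing.
    if m == 0:
        raise ValueError("m cannot be 0")
    if n is not None and n < 1:
        raise ValueError("n cannot be less than 1")
    # positive m counts from the start (1-based);
    # negative m counts backwards from the end of char
    start = m - 1 if m > 0 else len(char) + m
    end = len(char) if n is None else start + n
    return char[start:end]

print(substr('ABCDEFG', 3, 4))   # CDEF
print(substr('ABCDEFG', -5, 4))  # CDEF (counting backwards from the end)
print(substr('ABCDEFG', 2))      # BCDEFG (n omitted: to the end of char)
```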

TRANSLATE(char, from, to)
   Returns char with all occurrences of each character in from replaced by its corresponding character in to, where char, from, and to are string arguments.
TRIM
TRIM( [ [ <trim_spec> ] [ char ] FROM ] string )

Removes leading and/or trailing blanks (or other characters) from a string. <trim_spec> is one of LEADING, TRAILING, or BOTH; if <trim_spec> is omitted, then BOTH is implied. If char is omitted, then a space character is implied.
  Returns the string argument char with all its letters converted to uppercase. The return value has the same datatype as char.
UPPER(char)
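Several of these character functions can be tried directly in SQLite from Python: SQLite supports UPPER, LOWER, LTRIM, RTRIM, TRIM and SUBSTR under the same names, while INITCAP, LPAD, RPAD, CHR and TRANSLATE are Oracle-specific and are not shown. A runnable sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute("""
    SELECT UPPER('the soap'),
           LOWER('THE SOAP'),
           LTRIM('   Page1'),
           RTRIM('Page1   '),
           TRIM('  Page1  '),
           SUBSTR('ABCDEFG', 3, 4)
""").fetchone()
print(row)  # ('THE SOAP', 'the soap', 'Page1', 'Page1', 'Page1', 'CDEF')
```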
Aggregate functions in SQL:
   Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses.
       They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
An aggregate function produces a single value for an entire group or table. Aggregate functions operate on sets of rows and return results based on groups of rows.
The general syntax for most of the aggregate function is as follows:
         aggregate_function( [ DISTINCT | ALL ] expression)
The SQL aggregate functions are:
AVG([DISTINCT | ALL] n)
Returns the average value of n.
Example :
SELECT AVG(sal) "Average" FROM emp;

COUNT(* | [DISTINCT | ALL] expr)
Returns the number of rows in the query.
Example :
SELECT COUNT(*) "Total" FROM emp;
MAX([DISTINCT | ALL] expr)
Returns the maximum value of the expression expr.
MIN([DISTINCT | ALL] expr)
Returns the minimum value of the expression expr.

SUM([DISTINCT | ALL] n)
Returns the sum of values of n.
Example :
SELECT deptno, SUM(sal) TotalSalary FROM emp GROUP BY deptno;

Returns the following result:
   DEPTNO TOTALSALARY
--------- -----------
       10        8750
       20       10875
       30        9400


SQL COUNT function
The SQL COUNT function returns the number of rows in a table satisfying the criteria specified in the WHERE clause. It can count all rows, or only the non-NULL values of a column.
SQL SUM function
The SQL aggregate SUM() function returns the sum of all values in the selected column.
SQL AVG function
The SQL AVG function calculates the average value of a column of numeric type. It returns the average of all non-NULL values.
SQL MAX function
The aggregate function SQL MAX() is used to find the maximum (highest) value of a certain column or expression. This function is useful to determine the largest of all selected values of a column.
SQL MIN function
The aggregate function SQL MIN() is used to find the minimum (lowest) value of a column or expression. This function is useful to determine the smallest of all selected values of a column.
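The aggregate functions above can be exercised against a small emp-like table. A runnable sketch using Python's built-in sqlite3 module (the names and salary figures below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("KING",  10, 5000), ("CLARK", 10, 2450),
    ("SMITH", 20, 800),  ("FORD",  20, 3000),
    ("ALLEN", 30, 1600), ("JAMES", 30, 950),
])

# Without GROUP BY, one result row for the whole table...
total = conn.execute("SELECT COUNT(*), MAX(sal), MIN(sal) FROM emp").fetchone()
print(total)  # (6, 5000, 800)

# ...with GROUP BY, one result row per group.
for row in conn.execute(
        "SELECT deptno, SUM(sal), AVG(sal) FROM emp GROUP BY deptno"):
    print(row)
```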

Date Functions in SQL:

ADD_MONTHS: Adds the specified number of months to a date.
LAST_DAY: Returns the last day in the month of the specified date.
MONTHS_BETWEEN: Calculates the number of months between two dates.
NEW_TIME: Returns the date/time value, with the time shifted as requested by the specified time zones.
NEXT_DAY: Returns the date of the first weekday specified that is later than the date.
SYSDATE: Returns the current date and time on the Oracle server.
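These date functions are Oracle built-ins. The month arithmetic behind ADD_MONTHS (including clamping to the last day of a shorter month) can be sketched in Python; the helper name add_months here is my own:

```python
import calendar
import datetime

def add_months(d, n):
    # A sketch of ADD_MONTHS: shift d by n months, clamping the day
    # to the last day of the resulting month when necessary.
    month_index = d.month - 1 + n          # 0-based month arithmetic
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

print(add_months(datetime.date(2014, 1, 31), 1))   # 2014-02-28 (day clamped)
print(add_months(datetime.date(2014, 11, 15), 3))  # 2015-02-15
```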
Conversion Functions:
    Conversion functions convert a value from one datatype to another. Generally, the form of the function name follows the convention datatype TO datatype. The first datatype is the input datatype; the last datatype is the output datatype. There are five conversion functions.

i. CAST:
Converts data from one type to another type.

SELECT CAST ( <source_operand> AS <data_type> ) FROM DUAL;
ii. TO_CHAR:
 Converts a date or number to a value of the VARCHAR2 datatype, using the optional format fmt.
Syntax for Dates:
TO_CHAR(d [, fmt])
Syntax for Numbers:
TO_CHAR(n [, fmt])

iii. TO_DATE:
     The function takes character values as input and returns the formatted date equivalent. The TO_DATE function allows users to enter a date in any format, and then converts the entry into the default format.
TO_DATE( string1, [ format_mask ], [ nls_language ] )

 SELECT TO_DATE('January 26, 1996, 12:38 A.M.', 'Month dd YYYY HH:MI A.M.') FROM DUAL;
Returns the following result:
1996-01-26 12:38:00
iv. CONVERT:
Converts a character string from one character set to another.
{ fn CONVERT(value_exp, data_type) }
The value_exp argument is the value to be converted.
The data_type argument is the name of the character set to which value_exp is converted.
v. TO_NUMBER:
  The TO_NUMBER function converts a character value to a numeric datatype. If the string being converted contains nonnumeric characters, the function returns an error.
TO_NUMBER (string1, [format], [nls_parameter])

SELECT TO_NUMBER('121.23', '9G999D99') FROM DUAL;
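Outside Oracle, the same conversions appear in most languages. A rough Python analogue of TO_DATE, TO_CHAR and TO_NUMBER (note the format models differ: Oracle uses masks like 'DD-MM-YYYY', while Python's strptime uses directives like '%d-%m-%Y'):

```python
import datetime

# TO_DATE: parse a character value into a date
d = datetime.datetime.strptime("26-01-1996 12:38", "%d-%m-%Y %H:%M")
print(d)                        # 1996-01-26 12:38:00

# TO_CHAR: format a date back into a character value
print(d.strftime("%Y-%m-%d"))   # 1996-01-26

# TO_NUMBER: convert a character value to a number
print(float("121.23"))          # 121.23
```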


Rank Functions:
(I) These are also called "analytic functions".
(II) They are of two types:
•    RANK
•    DENSE_RANK
(III) These two functions are used to calculate the rank of a particular value within a set of values.

RANK calculates the rank of a value in a group of values.
Syntax  :-
                RANK() OVER (ORDER BY expr)

          The DENSE_RANK function acts like the RANK function, except that it assigns consecutive ranks (with no gaps after ties).
      DENSE_RANK() OVER (ORDER BY expr)
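SQLite (version 3.25 and later) also supports RANK and DENSE_RANK as window functions, so the difference between them is easy to demonstrate from Python; the names and salaries below are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("KING", 5000), ("FORD", 3000),
                  ("SCOTT", 3000), ("SMITH", 800)])

# RANK leaves a gap after the tie (1, 2, 2, 4);
# DENSE_RANK assigns consecutive ranks (1, 2, 2, 3).
ranks = {ename: (r, dr) for ename, r, dr in conn.execute("""
    SELECT ename,
           RANK()       OVER (ORDER BY sal DESC),
           DENSE_RANK() OVER (ORDER BY sal DESC)
    FROM emp
""")}
print(ranks["KING"])   # (1, 1)
print(ranks["SCOTT"])  # (2, 2)
print(ranks["SMITH"])  # (4, 3)
```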
General Functions:
  General functions are used to handle NULL values in database. The objective of the general NULL handling functions is to replace the NULL values with an alternate value. We shall briefly see through these functions below.
The NVL function substitutes an alternate value for a NULL value.
NVL( Arg1, replace_with )
As an enhancement over NVL, Oracle introduced NVL2, which can substitute an alternate value not only for NULL column values but also for NOT NULL values.
NVL2( string1, value_if_NOT_null, value_if_null )


 SQL> SELECT NVL2(JOB_CODE, 'Job Assigned', 'Bench')
FROM employees;
The NULLIF function compares two arguments expr1 and expr2. If expr1 and expr2 are equal, it returns NULL; otherwise, it returns expr1. Unlike the other NULL-handling functions, the first argument cannot be the literal NULL.

NULLIF (expr1, expr2)
The COALESCE function, a more generic form of NVL, returns the first non-NULL expression in the argument list. It takes a minimum of two parameters, with no upper limit on the number of arguments.
COALESCE (expr1, expr2, ... expr_n )
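NVL and NVL2 are Oracle-specific, but NULLIF and COALESCE are standard SQL, and SQLite's IFNULL plays the role of NVL. A runnable sketch with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute("""
    SELECT IFNULL(NULL, 'Bench'),            -- like NVL(NULL, 'Bench')
           NULLIF('a', 'a'),                 -- equal arguments -> NULL
           NULLIF('a', 'b'),                 -- unequal -> first argument
           COALESCE(NULL, NULL, 'first non-null')
""").fetchone()
print(row)  # ('Bench', None, 'a', 'first non-null')
```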
GROUP BY clause:
The GROUP BY clause gathers together all of the rows that contain data in the specified column(s) and allows aggregate functions to be performed on one or more columns. This can best be explained by an example:

SELECT column1, aggregate_function(column2)

FROM "list-of-tables"

GROUP BY "column-list";
Let's say you would like to retrieve a list of the highest paid salaries in each dept:

SELECT max(salary), dept

FROM employee

GROUP BY dept;
This statement will select the maximum salary for the people in each unique department. Basically, the salary of the person who makes the most in each department will be displayed, along with that department.
The HAVING clause:
The HAVING clause enables you to specify conditions that filter which group results appear in the final results.
The WHERE clause places conditions on the selected columns, whereas the HAVING clause places conditions on groups created by the GROUP BY clause.
The syntax for the SQL HAVING Clause is:
SELECT expression1, expression2, ... expression_n,
       aggregate_function (expression)
FROM tables
WHERE conditions
GROUP BY expression1, expression2, ... expression_n
HAVING condition;
Parameters or Arguments
aggregate_function can be a function such as SUM, COUNT, MIN, MAX, or AVG.
expression1, expression2, ... expression_n are expressions that are not encapsulated within an aggregate function and must be included in the GROUP BY Clause.
condition is the condition that is used to restrict the groups of returned rows. Only those groups whose condition evaluates to TRUE will be included in the result set.
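The WHERE-versus-HAVING distinction can be seen in a runnable sketch with Python's sqlite3 (the table and figures below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?)", [
    ("SALES", 3000), ("SALES", 2000),
    ("IT",    4000), ("IT",    4500),
    ("HR",    1500),
])

# GROUP BY forms the groups; HAVING then filters whole groups.
rows = conn.execute("""
    SELECT dept, SUM(salary) AS total
    FROM employee
    GROUP BY dept
    HAVING SUM(salary) > 4000
    ORDER BY dept
""").fetchall()
print(rows)  # [('IT', 8500), ('SALES', 5000)] -- HR's group (1500) is filtered out
```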
 ORDER BY clause:
The SQL ORDER BY clause is used to sort the records in the result set for a SELECT statement.
The syntax for the SQL ORDER BY clause is:
SELECT expressions
FROM tables
WHERE conditions
ORDER BY expression [ ASC | DESC ];
Parameters or Arguments
expressions are the columns or calculations that you wish to retrieve.
tables are the tables that you wish to retrieve records from. There must be at least one table listed in the FROM clause.
conditions are conditions that must be met for the records to be selected.
ASC is optional. It sorts the result set in ascending order by expression (the default, if no modifier is provided).
DESC is optional. It sorts the result set in descending order by expression.
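A minimal runnable sketch of ASC/DESC using Python's sqlite3 (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("ANNA", 2000), ("BOB", 3500), ("CARL", 1200)])

# DESC sorts highest first; with no modifier, ASC is the default.
names = [r[0] for r in conn.execute(
    "SELECT name FROM employee ORDER BY salary DESC")]
print(names)  # ['BOB', 'ANNA', 'CARL']
```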

Sep 26, 2014

UFT Tutorial for beginner

Introduction on UFT Test Tool

I) What is UFT?

•    Unified Functional Testing (UFT) is a functional and regression test tool from HP.
•    UFT is an advanced version of QTP: UFT = QTP + service testing tools.
•    QTP supports GUI testing only, whereas UFT supports both GUI and API testing.
•    Both QTP and UFT offer limited support for performance testing and reliability testing.
•    UFT supports the Windows operating system only.

II) UFT Tool Architecture

•    UFT is developed in .NET technology.
•    UFT is a one-tier (desktop) application. It doesn't have any database; instead, it stores its resources as files on the hard disk.
•    The UFT IDE has Record and Run features to design and execute tests, and it has checkpoints, output values, transaction points, etc. for enhancing tests.
•    UFT has integrated tools for batch testing, encoding passwords, test results deletion, etc.
•    UFT has a VBScript engine to apply programming logic to our tests.
•    It has two programming interfaces: one is the UFT Editor and the other is the Function Library.
•    UFT has an integrated MS Access engine for database testing and other data-related operations.

III) Object based Test Tool

•    UFT is an object-based test tool: it performs test operations based on front-end objects.
•    For database testing, no front-end object reference is required.
•    Examples of software objects in Windows-based applications: Window, Dialog box, Edit box, Drop-down box, List box, Combo box, Button, Radio button, Check box, etc. In web-based applications: Browser, Page, Link, Image, Edit box, Drop-down box, List box, Combo box, Button, Radio button, Check box, etc.
•    In UFT test automation we work with four types of objects: run-time objects, test objects, utility objects and automation objects.

IV) Test Design in UFT

•    UFT has a recording feature to design tests; alternatively, we can write tests by adding objects to the Object Repository, or use descriptive programming to generate tests.
•    Using UFT features like checkpoints, output values, transaction points, etc. we can enhance tests; alternatively, we can use VBScript features like flow-control statements, functions, automation objects, etc.

V) Test Execution in UFT

•    For a single test run we can use the Run command.
•    For batch testing we can use the Test Batch Runner tool.
•    For step-by-step execution we can use debug commands like Step Into, Step Over, Step Out, etc.

VI) Integration with ALM

•    UFT can integrate with ALM (Application Lifecycle Management), which is a test management tool from HP.
•    To integrate UFT with ALM, we need to install the ALM add-in for UFT.

VII) Challenges in UFT Test Automation

•    Object identification: UFT performs test operations based on front-end objects, but sometimes it may not recognize some objects even though we load the appropriate add-ins.
•    Handling a huge number of objects: some applications may have thousands of objects, and handling thousands of objects is difficult.
•    Executing web tests using different browsers like Google Chrome, Mozilla Firefox, etc.

Sep 22, 2014

Overview on RDBMS



RDBMS is a term used to describe an entire suite of programs for both managing a relational database and communicating with that relational database engine. Sometimes Software Development Kit (SDK) front-end tools and complete management kits are included with relational database packages (e.g., MS Access). In other words, an RDBMS is both the database engine and any other tools that come with it.

What is RDBMS? 

    RDBMS stands for Relational Database Management System. RDBMS data is structured in database tables, fields and records. Each RDBMS table consists of database table rows. Each database table row consists of one or more database table fields.

          An RDBMS stores the data in a collection of tables, which might be related by common fields (database table columns). An RDBMS also provides relational operators to manipulate the data stored in the database tables. Most RDBMSs use SQL as the database query language.


What is database?

A database is a logically coherent collection of data with some inherent meaning, representing some aspect of the real world, which is designed, built and populated with data for a specific purpose.

 What is DBMS?

It is a collection of programs that enables users to create and maintain a database. In other words, it is general-purpose software that provides users with the processes of defining, constructing and manipulating the database for various applications.

What is a Database system?
The database and the DBMS software together are called a database system.

Disadvantages of a File Processing System
 Data redundancy and inconsistency.
 Difficulty in accessing data.
 Data isolation.
 Data integrity problems.
 Concurrent access is not possible.
 Security problems.
Describe the three levels of data abstraction
There are three levels of abstraction:
 Physical level: The lowest level of abstraction; describes how data are stored.
 Logical level: The next higher level of abstraction; describes what data are stored in the database and what relationships exist among those data.
 View level: The highest level of abstraction; describes only part of the entire database.

 What is System R? What are its two major subsystems?

System R was designed and developed during 1974-79 at the IBM San Jose Research Center. It is a prototype whose purpose was to demonstrate that it is possible to build a relational system that can be used in a real-life environment to solve real-life problems, with performance at least comparable to that of existing systems.
Its two subsystems are:
 Research Storage System (RSS)
 Relational Data System (RDS)

How is the data structure of System R different from the relational structure?

Unlike relational systems, in System R:
Domains are not supported
Enforcement of candidate key uniqueness is optional
Enforcement of entity integrity is optional
Referential integrity is not enforced 

 What is Data Independence?

Data independence means that 'the application is independent of the storage structure and access strategy of data'. In other words, the ability to modify the schema definition at one level should not affect the schema definition at the next higher level.
     Two types of data independence:
 Physical Data Independence: Modification at the physical level should not affect the logical level.
 Logical Data Independence: Modification at the logical level should not affect the view level.
NOTE: Logical data independence is more difficult to achieve.

What is a view? How is it related to data independence?

A view may be thought of as a virtual table, that is, a table that does not really exist in its own right but is instead derived from one or more underlying base tables. In other words, there is no stored file that directly represents the view; instead, a definition of the view is stored in the data dictionary. Growth and restructuring of base tables is not reflected in views. Thus the view can insulate users from the effects of restructuring and growth in the database, and hence accounts for logical data independence.

 What is Data Model?

A collection of conceptual tools for describing data, data relationships data semantics and constraints. 

What is the E-R model?

    The Entity-Relationship (E-R) data model is based on a perception of the real world that consists of basic objects called entities and of relationships among these objects. Entities are described in a database by a set of attributes.


    A key is an attribute or set of attributes of an entity which can be used to identify it.
 Super Key is a set of one or more attributes which, taken collectively, allows an entity to be uniquely identified in an entity set.
  Candidate Key is a super key for which no proper subset is a super key (i.e., a minimal super key).
 Primary Key is a candidate key chosen by the database designer as the principal means of identifying entities within an entity set.

ER Model: Constraints

     Key Constraints: These are constraints implied by the existence of candidate keys. The table definition includes a specification implying uniqueness of the attributes constituting the primary key or alternate keys. A primary key constraint also implies a no-nulls constraint.
    Referential Constraints: Constraints implied by the existence of foreign keys in the table definition.
    Other Constraints: Constraints enforcing checks of the business logic of the application in the table definition.

Features of RDBMS

A relational database management system has the following features:
1- It can handle complex queries.
2- An RDBMS is very secure. A full RDBMS can prevent any unauthorized access.

Advantages of RDBMS over DBMS:

•    Reliability is improved because the data is not spread across the network and several applications. Only one process handles the data.
•    Network traffic is greatly reduced. In the desktop database model, the entire database along with its indexes must be sent to the client, whereas in the client/server database model only the result is sent to the client.
•    Upgrading a heavily used desktop database to a well-designed client/server database will reduce database-related network traffic by more than 95%.
•    Performance is improved, as database operations are handled on the server, so client PCs have lower processing requirements.
•    Security is improved, as data are kept within a single server. Hacking into a data file that is protected within the server is much more difficult than hacking into a data file in the desktop database model.
•    Data integrity constraints and business rules can be enforced at the server level. You can specify rules such as "marks in any subject cannot exceed 100".
•    Data sharing: SQL is used to coordinate data sharing by concurrent users, ensuring that they do not interfere with one another.
•    Data can be integrated across multiple platforms, i.e. LAN and WAN, and one can also incorporate data from the Internet.

CODD Rules:

A relational database management system (RDBMS) is a database management system (DBMS) that is based on the relational model as introduced by E. F. Codd. Most popular commercial and open source databases currently in use are based on the relational model.
A short definition of an RDBMS may be a DBMS in which data is stored in the form of tables and the relationship among the data is also stored in the form of tables.

E. F. Codd, the famous mathematician, introduced 12 rules for the relational model for databases, commonly known as Codd's rules. The rules mainly define what is required for a DBMS to be considered relational, i.e., an RDBMS. There is also one more rule, Rule 0, which specifies that the relational model should use relational facilities to manage the database. The rules and their descriptions are as follows:
Rule 0: Foundation Rule

A relational database management system should be capable of using its relational facilities (exclusively) to manage the database.

Rule 1: Information Rule
All information in the database is to be represented in one and only one way. This is achieved by values in column positions within rows of tables.

Rule 2: Guaranteed Access Rule
All data must be accessible with no ambiguity; that is, each and every datum (atomic value) is guaranteed to be logically accessible by resorting to a combination of table name, primary key value and column name.

Rule 3: Systematic treatment of null values
Null values (distinct from empty character string or a string of blank characters and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.

Rule 4: Dynamic On-line Catalog Based on the Relational Model
The database description is represented at the logical level in the same way as ordinary data, so authorized users can apply the same relational language to its interrogation as they apply to regular data. The authorized users can access the database structure by using common language i.e. SQL.

Rule 5: Comprehensive Data Sublanguage Rule
A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all of the following:
a.    data definition
b.    view definition
c.    data manipulation (interactive and by program)
d.    integrity constraints
e.    authorization
f.    Transaction boundaries (begin, commit, and rollback).

Rule 6: View Updating Rule
All views that are theoretically updateable are also updateable by the system.

Rule 7: High-level Insert, Update, and Delete
The system must support insert, update and delete operations fully, and it must also be able to perform these operations on multiple rows simultaneously.

Rule 8: Physical Data Independence
Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods.

Rule 9: Logical Data Independence

Application programs and terminal activities remain logically unimpaired when information preserving changes of any kind that theoretically permit unimpairment are made to the base tables.

Rule 10: Integrity Independence
Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.

Rule 11: Distribution Independence
The data manipulation sublanguage of a relational DBMS must enable application programs and terminal activities to remain logically unimpaired whether and whenever data are physically centralized or distributed.

Rule 12: Nonsubversion Rule
If a relational system has or supports a low-level (single-record-at-a-time) language, that low-level language cannot be used to subvert or bypass the integrity rules or constraints expressed in the higher-level (multiple-records-at-a-time) relational language.


Normalisation:

•    Normalisation is a design technique that is widely used as a guide in designing relational databases.
•    It is a two-step process that puts data into tabular form by removing repeating groups and then removing duplicated data from the relational tables.
•    Thus normalisation is the process of structuring an unstructured relation into a structured one with the purpose of removing redundancy and anomalies.
•    Normalisation theory is based on normal forms. A relational table is said to be in a particular normal form if it satisfies a certain set of constraints.

Functional Dependency : It describes the relationship between two attributes of the same relational table. One of the attributes is called the determinant and the other attribute is called the determined. For each value of the determinant there is associated one and only one value of the determined.
               A -> B
Here A is the determinant and B is the determined; we say that A functionally determines B, or B is functionally dependent on A.

Fully Functional Dependence : If A and B are two attribute sets of a relation, then B is fully functionally dependent on A if B is functionally dependent on A and not on any proper subset of A.

Transitive Dependency : If B is functionally dependent on A and C is functionally dependent on B, then C is transitively dependent on A.

The following are the types of normal forms:

First Normal Form (1NF)

When a table is broken up (decomposed) into more tables with all repeating groups (records) of data eliminated, the table data is said to be in First Normal Form (1NF).

A table is said to be in 1st Normal form if:

(a) There is no repeating group.
(b) All the key attributes are defined.
(c) All the non-key attributes are dependent on a primary key.

Second Normal Form (2NF)

A table is said to be in Second Normal Form (2NF) if each record in the table is in First Normal Form (1NF) and each column in the record is fully dependent on its primary key.

A table is in 2nd Normal Form if:

(a) It is in 1st Normal Form.
(b) No non-key attribute is dependent on only a part of the composite key (a combination of two or more attributes declared as the primary key); that is, all the attributes must be dependent on the whole composite key, not just a part of it.

Note: 2nd Normal Form can be applied only to tables that have a composite key.

Third Normal Form (3NF)

A table is said to be in 3rd Normal Form when all transitive dependencies are removed from the data.
Transitive dependency is the dependency of a non-key attribute on another non-key attribute of the table.

A table is said to be in 3NF if:

(a) It is in 2NF.
(b) It doesn't contain any transitive dependencies.

There are certain situations in which further normalization is not needed, because the table is already normalized. Those situations are as follows:

(a) There are no repeating groups in the table.
(b) A primary key is defined.
(c) All non-key attributes are fully dependent on the key attribute (primary key) or key attributes (composite key).
(d) There are no transitive dependencies.

Boyce-Codd Normal Form (BCNF): A relation is said to be in BCNF if and only if every determinant is a candidate key. BCNF is a stronger form of 3NF: every BCNF relation is in 3NF, but not every 3NF relation is in BCNF.
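The 3NF decomposition described above can be sketched end to end with Python's sqlite3 (the schema and data here are invented for illustration): dept_location depends on the non-key attribute dept_no, a transitive dependency, so it is moved into its own table keyed on dept_no; a join then reconstructs the original rows without the redundancy.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Not in 3NF: dept_location depends on dept_no, a non-key attribute
# (a transitive dependency), so the location is stored redundantly.
conn.execute("""CREATE TABLE emp_flat
                (emp_id INTEGER PRIMARY KEY, ename TEXT,
                 dept_no INTEGER, dept_location TEXT)""")
conn.executemany("INSERT INTO emp_flat VALUES (?, ?, ?, ?)", [
    (1, "SMITH", 10, "DALLAS"), (2, "JONES", 10, "DALLAS"),
    (3, "BLAKE", 20, "CHICAGO"),
])

# 3NF decomposition: the dept_no -> dept_location dependency gets
# its own table, keyed on dept_no.
conn.execute("CREATE TABLE emp3nf (emp_id INTEGER PRIMARY KEY, "
             "ename TEXT, dept_no INTEGER)")
conn.execute("CREATE TABLE dept (dept_no INTEGER PRIMARY KEY, "
             "dept_location TEXT)")
conn.execute("INSERT INTO emp3nf SELECT emp_id, ename, dept_no FROM emp_flat")
conn.execute("INSERT INTO dept "
             "SELECT DISTINCT dept_no, dept_location FROM emp_flat")

# Each location is now stored once; a join reconstructs the original rows.
rows = conn.execute("""
    SELECT e.emp_id, e.ename, e.dept_no, d.dept_location
    FROM emp3nf e JOIN dept d ON e.dept_no = d.dept_no
    ORDER BY e.emp_id
""").fetchall()
print(rows)
```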

Overview on DBMS


         Oracle is a relational database management system, which organizes data in the form of tables. Oracle is one of many database servers based on the RDBMS model; it manages a store of data with respect to three specific things: data structure, data integrity, and data manipulation. With Oracle's cooperative server technology we can realize the benefits of open, relational systems for all applications. Oracle makes efficient use of all system resources, on all hardware architectures, to deliver unmatched performance, price/performance, and scalability.

Components of a DBMS

A database management system has three components:

A data definition language (DDL) is the formal language programmers use to specify the structure and content of the database. DDL defines each data element as it appears in the database before that element is translated into the forms required by application programs. With it, a data schema can be defined and also changed later.
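As an illustration of DDL, the sketch below (hypothetical table and column names) uses Python's built-in sqlite3 module to define a schema and then change it later with ALTER TABLE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the structure of a hypothetical table.
cur.execute("CREATE TABLE student (roll_no INTEGER PRIMARY KEY, name TEXT)")
# DDL: the schema can also be changed later.
cur.execute("ALTER TABLE student ADD COLUMN city TEXT")

# The column list comes from the table definition, not from any stored rows.
cols = [c[1] for c in cur.execute("PRAGMA table_info(student)")]
print(cols)  # ['roll_no', 'name', 'city']
```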

A data manipulation language (DML) is a language for describing operations on data, such as storing, searching, reading, and changing it (the so-called data manipulation). Typical DML operations, with their respective keywords in the Structured Query Language (SQL), are:

Add data (INSERT)
Change data (UPDATE)
Delete data (DELETE)
Query data (SELECT)
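The four DML keywords above can be demonstrated in a few lines. This is a minimal sketch with an invented table and data, run through Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")

# INSERT: add data
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, 'Ravi', 30000), (2, 'Sita', 35000)])
# UPDATE: change data
cur.execute("UPDATE emp SET salary = 40000 WHERE name = 'Sita'")
# DELETE: delete data
cur.execute("DELETE FROM emp WHERE id = 1")
# SELECT: query data
rows = cur.execute("SELECT name, salary FROM emp").fetchall()
print(rows)  # [('Sita', 40000)]
```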

Data Dictionary: This is an automated or manual file that stores definitions of data elements and data characteristics, such as usage, physical representation, ownership (who in the organization is responsible for maintaining the data), authorization, and security.
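Most relational systems expose their automated data dictionary as queryable catalog tables. As a small illustration, SQLite keeps object definitions in its sqlite_master catalog; the sketch below (hypothetical table name) queries it via Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE payroll (emp_id INTEGER, amount REAL)")

# sqlite_master records each object's type, name, and defining SQL,
# acting as the database's data dictionary.
entry = cur.execute(
    "SELECT type, name FROM sqlite_master WHERE name = 'payroll'").fetchone()
print(entry)  # ('table', 'payroll')
```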

Importance of DBMS
      A database management system is important because it manages data efficiently and allows users to perform multiple tasks with ease. A database management system stores, organizes and manages a large amount of information within a single software application. Use of this system increases efficiency of business operations and reduces overall costs.

        Database management systems are important to businesses and organizations because they provide a highly efficient method for handling multiple types of data. Some of the data that are easily managed with this type of system include: employee records, student information, payroll, accounting, project management, inventory and library books. These systems are built to be extremely versatile.
Without database management, tasks have to be done manually and take more time. Data can be categorized and structured to suit the needs of the company or organization. Data is entered into the system and accessed on a routine basis by assigned users. Each user may have an assigned password to gain access to their part of the system. Multiple users can use the system at the same time in different ways.

What is the need for a DBMS?

A database management system (DBMS) can help address the employee count scenario and a range of even more complex situations related to cost, order status or inventory management by presenting the same data to everyone in the business at the same time. A DBMS also eliminates the frustrating hunt for the right version of the right spreadsheet on a vast and disorganized network drive.

• As businesses grow, the volume of data they accumulate grows exponentially. Managing this data deluge becomes increasingly difficult just at the moment when superior data management becomes more important to business success.

• As businesses expand, more sophisticated tools are needed to manage data. Tools that serve start-ups well are overwhelmed by the demands faced by larger businesses.

• A database management system (DBMS) is a powerful tool used to store data, secure it, protect it and make it quickly available to people who need it.

• A DBMS enables a business to squeeze more value from the data it collects for improved decision-making.

What are Advantages and Disadvantages of DBMS?

The advantages and disadvantages of a DBMS are as follows:

Advantages:
Reduced data redundancy
Reduced updating errors and increased consistency
Greater data integrity and independence from application programs
Improved data access for users through the use of host and query languages
Improved data security
Reduced data entry, storage, and retrieval costs
Facilitated development of new application programs

Disadvantages:
Database systems are complex, difficult, and time-consuming to design
Substantial hardware and software start-up costs
Damage to the database affects virtually all application programs
Extensive conversion costs in moving from a file-based system to a database system
Initial training required for all programmers and users

Database Models:

Database systems can be based on different data models, or database models. A data model is a collection of concepts and rules for describing the structure of the database; structure here means the data types, the constraints, and the relationships used to describe and store the data.

Hierarchical Model:
In a hierarchical DBMS one data item is subordinate to another one. This is called a parent-child relationship. The hierarchical data model organizes data in a tree-like structure.
One of the rules of a hierarchical database is that a parent can have multiple children, but a child can only have one parent. For example, think of an online store that sells many different products. The entire product catalog would be the parent, and the various types of products, such as books, electronics, etc., would be the children. Each type of product can have its own children categories.
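The parent-child rule can be sketched in a few lines of Python. The catalog below is a hypothetical example (all names invented); the helper shows that every node has at most one parent:

```python
# A hypothetical product catalog as a hierarchy: each child has exactly one parent.
catalog = {
    "Catalog": ["Books", "Electronics"],
    "Books": ["Fiction", "Technical"],
    "Electronics": ["Phones"],
}

def parent_of(node):
    """Return the single parent of a node, or None for the root."""
    for parent, children in catalog.items():
        if node in children:
            return parent
    return None

print(parent_of("Fiction"))  # Books
print(parent_of("Catalog"))  # None (the root has no parent)
```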

Network Model:
In a network DBMS every data item can be related to many other ones, so the database structure is like a graph. This is similar to the hierarchical model in that it also provides a tree-like structure; however, a child is allowed to have more than one parent. In the example of the product catalog, a book could fall into more than one category. The structure of a network database becomes more like a cobweb of connected elements.
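The difference from the hierarchical model is that a record may be linked upward to several owners. A minimal Python sketch with invented data:

```python
# Network model: parent-child links form a graph, and a child record
# (here a hypothetical book) may have many parents (categories).
edges = [
    ("Books", "Databases Explained"),
    ("Technical", "Databases Explained"),
    ("Books", "A Novel"),
]

def parents_of(child):
    """Return every parent linked to a child record."""
    return sorted(p for p, c in edges if c == child)

print(parents_of("Databases Explained"))  # ['Books', 'Technical']
```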

Relational Model:
In a relational DBMS all data are organized in the form of tables. This DBMS model emerged in the 1970s and has become by far the most widely used type of DBMS. Most of the DBMS software developed over the past few decades uses this model. In a table, each row represents a record, also referred to as an entity. Each column represents a field, also referred to as an attribute of the entity.
A relational DBMS uses multiple tables to organize the data. Relationships are used to link the various tables together. Relationships are created using a field that uniquely identifies each record. For example, for a table of books, you could use the ISBN number since there are no two books with the same ISBN. For a table of authors, you would create a unique Author ID to identify each individual author.
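A minimal sketch of this idea, with hypothetical author and book data, using Python's built-in sqlite3 module; the author_id field is the relationship that links the two tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two tables linked by a unique Author ID (all names and values invented).
cur.execute("CREATE TABLE author (author_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE book (
    isbn TEXT PRIMARY KEY,  -- uniquely identifies each book record
    title TEXT,
    author_id INTEGER REFERENCES author(author_id))""")

cur.execute("INSERT INTO author VALUES (1, 'A. Writer')")
cur.execute("INSERT INTO book VALUES ('978-0000000001', 'Sample Book', 1)")

# The relationship field lets us join the two tables back together.
row = cur.execute("""SELECT b.title, a.name
                     FROM book b JOIN author a ON b.author_id = a.author_id""").fetchone()
print(row)  # ('Sample Book', 'A. Writer')
```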

Object Model:
The data is stored in the form of objects, which are instances of structures called classes that define the data within. The object-oriented structure has the ability to handle graphics and video, which makes this model popular for multimedia web-based applications. It was designed to work with object-oriented programming languages such as Java.
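A small Python sketch of the idea, with a hypothetical class: each stored item is an object, an instance of a class that defines both its data and its behaviour:

```python
# Object model: the class defines the structure; stored records are instances.
class MediaItem:
    def __init__(self, title, media_type):
        self.title = title            # data held by the object
        self.media_type = media_type  # e.g. "video", "image"

    def describe(self):
        """Behaviour stored alongside the data."""
        return f"{self.title} ({self.media_type})"

# Each stored record is an object of the class.
clip = MediaItem("Intro Video", "video")
print(clip.describe())  # Intro Video (video)
```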

Normalization:

Normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics that could lead to a loss of data integrity.
The objectives of normalization are:
Free the database of modification anomalies
Minimize redesign when extending the database structure
Make the data model more informative to users
Avoid bias towards any particular pattern of querying

In general, relational databases should be normalized to the "third normal form".

Process of Normalization:
There are two main steps of the normalization process: eliminate redundant data (for example, storing the same data in more than one table) and ensure data dependencies make sense (only storing related data in a table). Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored. 
Normalization is a formal technique for analyzing a relation based on its primary key and the functional dependencies between its attributes. It is often executed as a series of steps, each corresponding to a specific normal form with known properties. As normalization proceeds, relations become progressively more restricted (stronger) in format and less vulnerable to update anomalies.

First Normal Form (1NF):
No Repeating Elements or Groups of Elements.
A relation in which the intersection of each row and column contains one and only one value.
All key attributes get defined
No repeating groups in table
All attributes dependent on primary key

Second Normal Form (2NF):
No Partial Dependencies on a Concatenated Key.
A relation that is in 1NF and every non-primary-key attribute is fully functionally dependent on the primary key (no partial dependency).

Third Normal Form (3NF):
No Dependencies on Non-Key Attributes.
A relation that is in 1NF and 2NF and in which no non-primary-key attribute is transitively dependent on the primary key. 
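The 2NF rule above can also be shown concretely. In the hypothetical order-line design below (all names invented, run through Python's built-in sqlite3 module), product_name would depend only on product_id, one part of the composite key (order_id, product_id), so it is moved into its own table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 2NF decomposition: product_name depended only on product_id,
# so product data gets its own table.
cur.execute("CREATE TABLE product (product_id INTEGER PRIMARY KEY, product_name TEXT)")
cur.execute("""CREATE TABLE order_line (
    order_id INTEGER,
    product_id INTEGER REFERENCES product(product_id),
    quantity INTEGER,
    PRIMARY KEY (order_id, product_id))""")

cur.execute("INSERT INTO product VALUES (10, 'Keyboard')")
cur.executemany("INSERT INTO order_line VALUES (?, ?, ?)",
                [(1, 10, 2), (2, 10, 1)])

# The product name is stored once, however many orders reference it.
rows = cur.execute("""SELECT o.order_id, p.product_name, o.quantity
                      FROM order_line o JOIN product p USING (product_id)
                      ORDER BY o.order_id""").fetchall()
print(rows)  # [(1, 'Keyboard', 2), (2, 'Keyboard', 1)]
```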

Sep 18, 2014

Andhra Pradesh Cities and Towns Population


1)    Visākhapatnam    1,728,128

2)    Vijayawāda    1,476,931

3)    Guntūr    670,073

4)    Nellore    558,548

5)    Kurnool    484,327

6)    Rājahmundry    476,873

7)    Tirupati    461,900

8)    Kākināda    443,028

9)    Kadapa    344,893

10)    Anantapur    340,613

11)    Elūru    250,834

12)    Vizianagaram    239,909

13)    Proddatūr    217,786

14)    Nandyāl    211,424

15)    Ādoni    184,625

16)    Madanapalle    180,180

17)    Chittoor    175,647

18)    Machilīpatnam    169,892

19)    Tenāli    164,937

20)    Chīrāla    162,471

21)    Hindupur    151,677

22)    Srīkākulam    147,015

23)    Bhīmavaram    146,961

24)    Guntakal    126,270

25)    Dharmavaram    121,874

26)    Gudivāda    118,167

27)    Narasaraopet    117,489

28)    Tādpatri    108,171

29)    Tādepalligūdem    104,032

30)    Chilakalūrupet    101,398

Telangana Cities and Towns Population

1)    Hyderābād    7,677,018

2)    Warangal         753,438

3)    Nizāmābād       311,152

4)    Karīmnagar      297,447

5)    Khammam       262,255

6)    Rāmagundam      252,308

7)    Mahbūbnagar      210,258

8)    Mancherial      163,552

9)    Nalgonda      154,326

10)    Ādilābād      139,383

11)    Kottagūdem      119,501

12)    Siddipet      114,091

13)    Suryāpet      106,805

14)    Miryalguda      104,918   

15)    Jagtiāl          103,930

Sep 16, 2014

UFT Online Training

(HP UFT 12.00)
(In Scripting Orientation and Project)

Weekend Program will commence on: 20th September 2014

Duration: 60 Hours 
Mobile: 91-7032677426

Fee: 350 US Dollars

a) Software Testing Principles & Practices (Manual Testing)
b) Programming Fundamentals
c) Database Fundamentals

Weekend Program (Saturday and Sunday)
Weekly 10 hours (6 Weekends)
Program will commence on: 20th September 2014
Timings: 5:30 PM to 10:30 PM (Indian Standard Time) with 2 breaks

8:00 AM  to 1:00 PM    (Eastern Standard Time (EST))

7:00 AM to   12:00 PM  (Central  Standard Time (CST))

5:00 AM to   10:00 AM  (Pacific Standard Time (PST)) 

1:00 PM to 6:00 PM (UK Time)
Days of Training: 
September: 20, 21, 27, 28
October: 4, 5, 11, 12, 18, 19, 25, 26

UFT (Formerly QTP) Syllabus

Module 1 (UFT Tool Fundamentals and Features) 
Lesson 1 (Overview on Test Automation)

  • Disadvantages of Manual Testing
  • Advantages of Test Automation
  • Disadvantages of Test Automation
  • Types of Test Tools
  • Coverage of Functional Testing
Lesson 2 (Basic Features of UFT Tool)
    •    UFT Product Information

    •    UFT Version History

    •    UFT Supporting Environments

    •    UFT Internal and External Add-ins

    •    UFT License

    •    UFT IDE
Lesson 3 (Overview on UFT Tool)

  • Add-in Manager
  • UFT Editor
  • Active Screen
  • Data Table
  • Debug Viewer
  • Errors Pane
  • Missing Resources
  • UFT Tool Menus
    • File Menu
    • Edit Menu
    • View Menu
    • Search Menu
    • Design Menu
    • Record Menu
    • Run Menu
    • Resources Menu
    • ALM Menu
    • Tools Menu
    • Window Menu
    • Help Menu
 Lesson 4 (Software Test Process)

•    Test Planning

•    Test Design

•    Test Execution

•    Test Closure

Lesson 5 (UFT Test Process)

  • Test Planning
  • Generating Basic Tests
  • Enhancing Tests
  • Running and Debugging Tests
  • Analyzing Test Results
  • Reporting Defects
Lesson 6 (Recording and Running Tests)

  • Test Recording
  • Test Run / Execution
  • Recording Modes
    • Normal Recording
    • Analog Recording
    • Low Level Recording
    • Insight Recording
• Run Modes

• Advantages of Recording

• Disadvantages of Recording

Lesson 7 (Types of Objects in UFT)

   • Overview on Software Objects

   • Run-time Objects

   • Test Objects

   • Utility Objects

   • Automation Objects

Lesson 8 (Object Repository)

•    Local Object Repository

•    Shared Object Repository

•    Add Objects

•    Rename Objects

•    Delete Objects

•    Export Local Objects

•    Merge Repositories

•    Associate Shared Repositories

•    Load Shared Repositories

•    Map Objects in between Object Repository and AUT

•    Export Shared Objects to XML/ Import from XML

•    Define New Test Objects

Lesson 9 (Object Identification Configuration)

•    What is Object Identification Configuration?

•    Why Object Identification Configuration?

•    Normal Identification

•    Smart Identification

•    Ordinal Identifier

•    Globalize Object Identification Configuration

Lesson 10 (Prerequisites for Generating Tests)

•    Test Scenario

•    Navigation / Steps

•    Verification Points

•    Error handling

•    Input Data

•    Comments

•    Test Objects Information

•    Methods / Operations Information

Lesson 11 (Keyword Driven Methodology)
Generating Tests Manually

•    Create Shared Object Repositories

•    Associate Shared Object Repositories / Load Shared Object Repositories at Run-Time

•    Generate Tests or Test scripts

    o    Using Editor View

    o    Using Keyword View

    o    Using Step Generator

    o    By Drag and Drop Objects from Object Repository to Tool Editor

Lesson 12 (Types of Statements in UFT Test)

•    Test Object Statements

•    Utility Statements

•    Declaration Statements

•    Flow Control Statements

        • Conditional Statements

        • Loop Statements

•    VBScript Statements

•    Check point Statements

•    Output value statements

•    Action calls, Function Calls

Lesson 13 (Descriptive Programming)

•    What is Descriptive Programming?

•    Advantages of Descriptive Programming

•    Identifying Unique Properties for Objects

•    Static Programming

•    Handling Duplicate Objects

•    Handling Multiple Instances of the Application

•    Centralized maintenance of Objects

•    Dynamic Programming

Lesson 14 (Test Methods or Operations)

•    Activate Method

•    Click Method

•    Close Method

•    Set Method

•    Select Method

•    GetVisibleText Method

•    GetRoProperty Method

•    GetItemsCount Method

•    GetContent Method

•    WaitProperty Method

•    ChildObjects Method

•    Navigate Method

•    Sync Method

•    CaptureBitmap Method

Lesson 15 (Inserting Checkpoints)

•    Inserting Standard Checkpoint

•    Inserting Text Checkpoint

•    Inserting Text Area Checkpoint

•    Inserting Bitmap Checkpoint

•    Inserting Database Checkpoint

•    Inserting Accessibility Checkpoint

•    Inserting XML Checkpoint (From Application)

•    Inserting XML Checkpoint (From Resource)

•    Inserting File Content Checkpoint

•    Inserting Page Checkpoint

•    Inserting Image Checkpoint

•    Inserting Table Checkpoint

•    Disadvantages of Checkpoints

Lesson 16 (Inserting Output Values)

•    Inserting Standard Output value

•    Inserting Text Output value

•    Inserting Text Area Output value

•    Inserting Database Output value

•    Inserting XML (From Application) Output value

•    Inserting XML (From Resource) Output value

•    Inserting File Content Output value

Lesson 17 (Inserting Transaction Points)

•    Inserting Start and End Transaction Points

•    Timer Function

•    Defining Test Results

•    Transaction Points Vs Timer Function

Lesson 18 (Parameterization)

•    What is Parameterization?

•    Purpose of Parameterization?

•    Data driven Testing

•    How to Parameterize Tests?

•    Ways of Parameterization
    o    Generate and Pass values using Loop statements

    o    Dynamic Submission of Test Data

    o    Using Data Table

    o    Using Action Parameters

    o    Using Environment variables

    o    Using Dictionary Object

    o    Using file System, Excel and Database Objects

Lesson 19 (Actions)

•    What is Action?

•    Purpose of Actions

•    Types of Actions

•    Create New Action

•    Rename Actions

•    Delete Actions

•    Make an Action from Reusable to Non-Reusable and Vice versa

•    Call an existing Action

•    Copy Action

•    Action Parameters (Input / Output)

Lesson 20 (Synchronization)
•    What is Synchronization?

•    Why Synchronization?

•    When Synchronization is required?

•    How to Synchronize UFT and AUT

    o    Inserting wait Statement

    o    Inserting Synchronization Point

    o    Increasing Tool default Synchronization Time

    o    Using Exist Property
•    Selecting an Appropriate Method

•    Advantages of Wait Statement

Lesson 21 (Environment Variables)

•    Purpose of Environment Variables

•    Types of Environment Variables

•    Define Environment Variables

•    Associate Environment Variables file to UFT

•    Access Environment Variables

•    Edit / Delete Environment Variables

Lesson 22 (Debugging Tests)

•    What is Debugging?

•    When Debugging is Required?

•    How to Debug?

•    VBScript Debug Commands and Breakpoint

•    Step by Step, At a Time, and Hybrid Test Executions

•    Debug Viewer, Watch variables, change values of Variables

Lesson 23 (Batch Testing)

•    Types of Test Run or Execution

•    Batch Testing

    o    Batch Testing using “Test Batch Runner” Tool

    o    Batch Testing using AOM Script

    o    Batch Testing using Driver Script
    o    Batch Testing using Quality Center Tool / ALM 

Lesson 24 (Recovery Scenarios)

•    What is Recovery Scenario?

•    Why Recovery Scenarios?

•    Trigger Events

•    Recovery Operations

•    Create New Recovery Scenarios

•    Associate Recovery Scenarios at Test level or Tool Level

•    Edit Recovery Scenarios (If required)

•    Delete Recovery Scenarios (If required)

Lesson 25 (UFT Tool Administration)

•    Test Settings Configuration

•    Tool Options and View Options Configuration

•    Object Identification Configuration

•    Virtual object Configuration

•    What is Globalize Tool Settings?

•    Why Globalize Tool Settings?

•    How to Globalize Tool Settings?

Module 2 (VBScript for UFT Test Automation)

Lesson 1 (Overview on VBScript)

•    Adding Comments

•    Basic Features of Visual basic Scripting Edition

•    Data Types

•    Declaring Constants

•    VBScript Variables

•    Operators

•    Conditional Statements

•    Loop Statements

•    Built-in Functions

•    User defined Functions

•    Regular Expressions

•    File System Operations

•    Working with Excel Application

•    Working with Word Application

•    Working with Databases

•    Dictionary Object

•    Error Handling

Lesson 2 (Adding Comments)

•    Purpose of Comments

•    Syntax for Adding Comments

•    Comment a block of Statements

•    Uncomment comment block

•    Usage of Comments in UFT Test Automation 

Lesson 3 (VBScript Data Types)

•    Implicit Declaration of Data Types

•    Check Data Sub Types

•    Convert Data from One Sub Type to another

Lesson 4 (VBScript Variables)

•    What is Variable?

•    Implicit and Explicit Declaration of Variables

•    Option Explicit Statement

•    Assigning values to Variables

•    Usage of variables

•    Naming Restrictions

•    Scope of Variables

•    Types of Variables

•    Array Variables

•    Dynamic Arrays

•    Dimensional Arrays

•    Assigning Series of Values to Array Variable

Lesson 5 (VBScript Constants)

•    Declaration of Constants

•    Built-in Constants

•    User defined Constants

Lesson 6 (VBScript Operators)

•    What is Operator?

•    Operator Precedence

•    Arithmetic Operators

•    Comparison Operators

•    Logical Operators 

Lesson 7 (VBScript Conditional Statements)

•    If Statements

•    Select Case Statements

•    Usage of Conditional Statements in UFT

•    Execute a Statement when condition is True (Simple If)

•    Execute a Block of Statements when condition is True

•    Execute a Block of Statements when condition is True otherwise execute another block of statements.

•    Decide among Several alternates (ElseIf)

•    Execute a block of statements when more than one condition is True (Nested If)

•    Decide among Several alternates using Select case statement

•    Single, Compound and Nested Conditions

•    Positive and Negative Conditions

•    Loops within Conditions and Vice Versa

Lesson 8 (VBScript Loop Statements)

•    Purpose of Loop Statements

•    For…Next Statement

•    Terminating For Loop

•    While…Wend Statement

•    Do While / Until …Loop Statement

•    Terminating Do Loop

•    For Each…Next Statement

•    Condition statements within Loops

•    Nested Loops

Lesson 9 (Built-in Functions)

•    Array Functions

•    String Functions

•    Date and Time Functions

•    Input/Output Functions

•    Conversion Functions

•    Math Functions

•    Miscellaneous Functions

•    Usage of Built-in Functions in UFT

Lesson 10 (User Defined Functions)

•    Types of User Defined Functions

•    Sub Procedures

•    Function Procedures

•    Internal and External Functions

•    Centralized maintenance Functions

•    Examples

    o    Sub Procedure with no Arguments

    o    Sub Procedure with Arguments

    o    Sub Procedure with Arguments and Verification points

    o    Function Procedure with returning value

    o    Function Procedure with returning multiple values

    o    Associating or Loading Function Library Files

    o    Calling a Function within the Function

Lesson 11 (Regular Expressions)

•    What is Regular Expression?

•    Usage of Regular Expressions in UFT Test Automation

•    Handling Dynamic Objects

•    Regular Expression Object

•    Search Operations using Regular Expression Object

Lesson 12 (File System Operations)

•    What is Computer File System?

•    Examples for File System Operations

•    How end user does File System Operations?

•    How to do automatic File System Operations?

•    Creating File System Object

•    Examples:

    o    Create a Folder (without manual interaction)

    o    Copy a Folder

    o    Delete a Folder

    o    Create a Text File or Flat File

    o    Copy a Text File

    o    Delete a Text File

    o    Create Text Stream Object

    o    Read data character by Character from a Text File

    o    Read Line by Line from a Text File

    o    Read entire content from a Text File

    o    Read data from a text file and perform Data Driven Testing

    o    Write Data continuously

    o    Write Data Line by Line

    o    Append Data

    o    Compare two text files by Size

    o    Compare two text files by Text

    o    Compare two text files by Binary Value (Exact match)

    o    Search Operations

    o    Checking existence of a Folder or Text File

Lesson 13 (Excel Application Operations)

•    Create Excel Application Object

•    Create Excel Work Book or File

•    Create Excel WorkBook and Excel WorkSheet objects

•    Read Data

•    Data Driven Testing by fetching Test Data from an Excel File

•    Write Data

•    Read and Write Data using same file

•    Compare Data (Exact Match)

•    Compare Data (Textual Comparison)

•    Compare Data (Many to Many Comparison)

•    Search for Data

•    Add / Remove new sheets to Existing excel file

•    Rename Sheets

Lesson 14 (Database Operations)

•    Create Database Connection Object

•    Create Database Recordset Object

•    Create Database Command Object

•    Create Provider for Database connectivity

•    Fetch entire Data from a database and perform Data driven Testing

•    Fetch range of Data from a database and perform Data driven Testing

•    Export Data from a Database to Excel file

•    Export Data from a Database to Text file

•    Export Data from Text file to Excel file

•    Export Data from Excel file to Text file

•    Export Data from Excel to Database

•    Export data from Text file to Database

Lesson 15 (Dictionary Object)

•    Create Dictionary Object

•    Methods and properties in Dictionary Object Model

•    Usage of Dictionary object in UFT Test Automation

Lesson 16 (Error handling in VbScript)

•    Using Exist property and Conditional statements

•    Using Some Built-in Functions

•    Using Exit Statement

•    Using Option explicit Statement

•    Using On Error Resume Next Statement

Lesson 17 (VbScript Coding Conventions)

•    Constant naming Conventions

•    Variable Naming Conventions

•    Object Naming Conventions

•    Code Commenting Conventions

•    Formatting Your Code

Module 3 (UFT Scripting)

Lesson 1 (Windows Scripting)

•    Handling GUI Objects

•    Object State validation

•    Input Domain Coverage

•    Output Domain Coverage

•    Database Testing

•    Other Examples

Lesson 2 (Web Scripting)  

•    Checking Links and other Web Objects

•    Forms validation

•    Output Domain Coverage

•    Cookies Testing

•    Web Script Examples

Module 4 (Automation Framework Design and Implementation)

Lesson 1 (Test Planning)

•    Get Environment Details and select appropriate Add ins

•    Analyzing the AUT (Application Under Test) in terms of Object Identification

•    Select Areas or Test Cases for Automation

•    Test Estimations

•    Tool Settings Configuration and Globalize

•    Automation Framework Implementation (Optional)

Lesson 2 (Automation Framework Theory)

•    What is Automation Framework?

•    Why Automation Framework?

•    Files to be created and used in Test Automation using UFT tool

•    Tasks to be performed in Test Automation using UFT tool

•    Types of Automation Framework

•    List of Keywords in UFT and VBScript

•    Key Elements of Automation Framework

•    Create Folder Structure

Lesson 3 (Automation Framework Implementation Practical)

•    Create Folder structure to store Automation Resources

•    Create Automation Resources and store into corresponding folders

•    Create Organizer Spreadsheet

•    Generate Driver script

•    Create Initialization Script

•    Implementing Basic Framework

•    Implementing Keyword Driven Framework

•    Implementing Hybrid Framework