Automated Testing Detail Test Plan
Automated Testing DTP Overview 
This Automated Testing Detail Test Plan (ADTP) identifies the specific tests that are to be performed to ensure the quality of the delivered product. System/Integration Test ensures that the product functions as designed and that all parts work together. This ADTP covers information for automated testing during the System/Integration Phase of the project and maps to the specification or requirements documentation for the project. This mapping is done in conjunction with the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document.
This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria, and it identifies the roles and responsibilities of the Automated Test Team so that they can execute the test. The objectives of this ADTP are: 
- Describe the test to be executed. 
- Identify and assign a unique number for each specific test. 
- Describe the scope of the testing. 
- List what is and is not to be tested. 
- Describe the test approach detailing methods, techniques, and tools. 
- Outline the Test Design including: 
- Functionality to be tested. 
- Test Case Definition. 
- Test Data Requirements. 
- Identify all specifications for preparation. 
- Identify issues and risks. 
- Identify actual test cases. 
- Document the design point. 
Test Identification 
This
 ADTP is intended to provide information for System/Integration Testing 
for the PRODUCT NAME module of the PROJECT NAME. The test effort may be 
referred to by its PROJECT REQUEST (PR) number and its project title for
 tracking and monitoring of the testing progress.   
Test Purpose and Objectives 
Automated testing during the System/Integration Phase as referenced in this document is intended to ensure that the product functions as designed, directly from customer requirements. The testing goal is to assess the quality of the structure, content, accuracy and consistency, response time and latency, and performance of the application as defined in the project documentation.  
Assumptions, Constraints, and Exclusions 
Factors which may affect the automated testing effort and may increase the risk to a successful test include: 
- Completion of development of front-end processes 
- Completion of design and construction of new processes 
- Completion of modifications to the local database 
- Movement or implementation of the solution to the appropriate testing or production environment 
- Stability of the testing or production environment 
- Load Discipline 
- Maintaining recording standards and automated processes for the project 
- Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid 
Entry Criteria 
The ADTP is complete, excluding actual test results. The ADTP has been signed off by the appropriate sponsor representatives, indicating approval of the plan for testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration Management rules are in place.
The environment for testing, including databases, application programs, and connectivity, has been defined, constructed, and verified.  
Exit Criteria  
In establishing the exit/acceptance criteria for Automated Testing during the System/Integration Phase of the test, the Project Completion Criteria defined in the Project Definition Document (PDD) should provide a starting point. All automated test cases have been executed as documented. The percentage of successfully executed test cases meets the defined criteria. Recommended criteria: no Critical or High severity problem logs remain open, all Medium problem logs have agreed-upon action plans, and the application has executed successfully to validate the accuracy of data, interfaces, and connectivity. 
Pass/Fail Criteria 
The results for each test must be compared to the pre-defined expected test results documented in the ADTP (and DTP where applicable). The actual results are logged in the Test Case detail within the Detail Test Plan if those results differ from the expected results. If the actual results match the expected results, the Test Case can be marked as a passed item without logging the duplicated results.
A test case passes if it produces the expected results as documented in 
the ADTP or Detail Test Plan (manual test plan). A test case fails if 
the actual results produced by its execution do not match the expected 
results. The source of failure may be the application under test, the 
test case, the expected results, or the data in the test environment. 
Test case failures must be logged regardless of the source of the 
failure. Any bugs or problems will be logged in the DEFECT TRACKING 
TOOL.
The responsible application resource corrects the problem and tests the 
repair. Once this is complete, the tester who generated the problem log 
is notified, and the item is re-tested. If the retest is successful, the
 status is updated and the problem log is closed.
If the retest is unsuccessful, or if another problem has been 
identified, the problem log status is updated and the problem 
description is updated with the new findings. It is then returned to the
 responsible application personnel for correction and test.
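As a hedged illustration of the comparison and logging flow above, the sketch below uses a hypothetical result record and an in-memory problem log; it is not the project's DEFECT TRACKING TOOL interface, only a summary of the rules stated in this section.

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseResult:
    case_id: str
    expected: str
    actual: str
    status: str = "Not Run"              # becomes "Passed" or "Failed"
    notes: list = field(default_factory=list)

def evaluate(result: TestCaseResult, problem_log: list) -> None:
    """Compare actual to expected results as described above.

    A match is marked as passed without duplicating the expected results;
    a mismatch is marked as failed and a problem log entry is raised,
    regardless of whether the source of failure is the application, the
    test case, the expected results, or the data in the test environment.
    """
    if result.actual == result.expected:
        result.status = "Passed"
    else:
        result.status = "Failed"
        result.notes.append(f"Actual result differs from expected: {result.actual!r}")
        problem_log.append({"case": result.case_id, "status": "Open",
                            "description": result.notes[-1]})

log = []
tc = TestCaseResult("AB1.1.1", expected="Pond page loads", actual="HTTP 500")
evaluate(tc, log)
print(tc.status, log)   # Failed, one open problem log entry
```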
Severity Codes are used to prioritize work in the test phase. They are 
assigned by the test group and are not modifiable by any other group. 
The following standard Severity Codes are used for identifying defects: 
Table 1. Severity Codes 
| Severity Code Number | Severity Code Name | Description | 
| 1 | Critical | Automated tests cannot proceed further within the applicable test case (no workaround). | 
| 2 | High | The test case or procedure can be completed, but produces incorrect output when valid information is input. | 
| 3 | Medium | The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (e.g., the specifications allow no special characters, but the system allows the user to continue when a special character is part of the test). | 
| 4 | Low | All test cases and procedures passed as written, but there could be minor revisions, cosmetic changes, etc. These defects do not impact functional execution of the system. | 
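To show how the severity codes tie back to the recommended exit criteria (no Critical or High problem logs open, all Medium logs with agreed-upon action plans), here is a minimal sketch; the problem-log fields are hypothetical placeholders.

```python
SEVERITY = {1: "Critical", 2: "High", 3: "Medium", 4: "Low"}

def exit_criteria_met(problem_logs):
    """Apply the recommended exit criteria to a list of problem logs.

    Each log is assumed to carry 'severity' (1-4), 'status', and, for
    Medium logs, an 'action_plan' flag -- placeholder fields only.
    """
    for log in problem_logs:
        still_open = log["status"] != "Closed"
        if still_open and log["severity"] in (1, 2):
            return False          # a Critical or High problem log remains open
        if still_open and log["severity"] == 3 and not log.get("action_plan"):
            return False          # a Medium log has no agreed-upon action plan
    return True

print(exit_criteria_met([{"severity": 3, "status": "Open", "action_plan": True}]))  # True
```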
The use of the standard Severity Codes produces four major benefits: 
- Standard
 Severity Codes are objective and can be easily and accurately assigned 
by those executing the test. Time spent in discussion about the 
appropriate priority of a problem is minimized. 
- Standard
 Severity Code definitions allow an independent assessment of the risk 
to the on-schedule delivery of a product that functions as documented in
 the requirements and design documents. 
- Use
 of the standard Severity Codes works to ensure consistency in the 
requirements, design, and test documentation with an appropriate level 
of detail throughout. 
- Use of the standard Severity Codes promotes effective escalation procedures. 
Test Scope 
The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing. 
Items to be tested by Automation (PRODUCT NAME …) 
Items not to be tested by Automation (PRODUCT NAME …)  
Test Approach 
Description of Approach
Automated testing is the process of identifying recordable test cases through all appropriate paths of a website, creating repeatable scripts, interpreting test results, and reporting to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing performed on the system. Automated test results will be generated, formatted into reports, and provided on a consistent basis to Generic project management.
System testing is the process of testing an integrated hardware and 
software system to verify that the system meets its specified 
requirements. It verifies proper execution of the entire set of 
application components including interfaces to other applications. 
Project teams of developers and test analysts are responsible for 
ensuring that this level of testing is performed.
Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether links inside and outside the website are working, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
For this project, the System and Integration ADTP and Detail Test Plan complement each other.
Since the goal of the System and Integration phase testing is to 
identify the quality of the structure, content, accuracy and 
consistency, response time and latency, and performance of the 
application, test cases are included which focus on determining how well
 this quality goal is accomplished.
Content testing focuses on whether the content of the pages match what 
is supposed to be there, whether key phrases exist continually in 
changeable pages, and whether the pages maintain quality content from 
version to version.
Accuracy and consistency testing focuses on whether today’s copies of 
the pages download the same as yesterday’s, and whether the data 
presented to the user is accurate enough.
Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, and whether parts of a site are so slow that the user discontinues working. Although Loadrunner provides the full measure of this test, various ad hoc time measurements will be taken within certain Winrunner scripts as needed.
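The ad hoc time measurements mentioned above amount to a start/stop timer around a page request. The sketch below uses the Python standard library as a stand-in for the timing statements placed inside a Winrunner script; the URL and threshold are placeholders, and Loadrunner remains the tool that provides the full performance measurement.

```python
import time
import urllib.request

def timed_request(url, threshold_seconds):
    """Time one page request and report whether it met the response budget."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= threshold_seconds

# Hypothetical usage against the test environment with a 3-second budget:
# elapsed, within_budget = timed_request("http://xxxxx/xxxxx", 3.0)
```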
Performance testing (Loadrunner) focuses on whether performance varies 
by time of day or by load and usage, and whether performance is adequate
 for the application.
Completion of automated test cases is denoted in the test cases with an indication of pass/fail and follow-up action. 
Test Definition 
This section addresses the development of the components required for the specific test. Included are identification of the functionality to be tested by automation and the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated product components.  
Test Functionality Definition (Requirements Testing) 
The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo testing by automation, the Test Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation and to facilitate tracking and monitoring of the test progress.
As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the test. 
Test Case Definition (Test Design) 
Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and output specifications. This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP. 
Test Data Requirements 
The automated test data required for the test is described below. The test data will be used to populate the databases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or OTS Automation Test Analyst.  
Automation Recording Standards 
Initial Automation Testing Rules for the Generic Project:
1. Ability to move through all paths within the applicable system
2. Ability to identify and record the GUI Maps for all associated test items in each path
3. Specific times for loading into automation test environment
4. Code frozen between loads into automation test environment
5. Minimum acceptable system stability 
Winrunner Menu Settings 
1. Default recording mode is CONTEXT SENSITIVE
2. Record owner-drawn buttons as OBJECT
3. Maximum length of list item to record is 253 characters
4. Delay for Window Synchronization is 1000 milliseconds (unless 
Loadrunner is operating in same environment and then must increase 
appropriately)
5. Timeout for checkpoints and CS statements is 1000 milliseconds
6. Timeout for Text Recognition is 500 milliseconds
7. All scripts will stop and start on the main menu page
8. All recorded scripts will remain short, since short scripts are easier to debug. However, entire scripts, or portions of scripts, can be added together for long runs once the environment has greater stability.  
Winrunner Script Naming Conventions 
1.
 All automated scripts will begin with GE abbreviation representing the 
Generic Project and be filed under the Winrunner on LAB11 W 
Drive/Generic/Scripts Folder.
2. GE will be followed by the Product Path name in lower case: air, htl, car
3. After the automated scripts have been debugged, a date for the script
 will be attached: 0710 for July 10. When significant improvements have 
been made to the same script, the date will be changed.
4. As incremental improvements are made to an automated script, version numbers will be attached signifying the script with the latest improvements, e.g. XX0710.1, XX0710.2; the .2 version is the most up-to-date.  
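As an illustration only, a small helper can assemble script names that follow the convention above (GE prefix, lower-case product path, date, and optional version suffix); the function itself is a sketch, not part of the project tooling.

```python
def script_name(product_path, month, day, version=None):
    """Build a Winrunner script name such as GEair0710 or GEair0710.2."""
    name = f"GE{product_path.lower()}{month:02d}{day:02d}"
    if version is not None:
        name = f"{name}.{version}"   # incremental improvement suffix
    return name

print(script_name("air", 7, 10))      # GEair0710
print(script_name("air", 7, 10, 2))   # GEair0710.2 -- the most up-to-date version
```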
Winrunner GUIMAP Naming Conventions 
1. All Generic GUI Maps will begin with XX followed by the area of test, e.g. the XXpond GUI Map represents all pond paths, the XXEmemmainmenu GUI Map represents all membership and main menu concerns, and the XXlogin GUI Map represents all XX login concerns.
2. As there can only be one GUI Map for each object, etc. on the site, the GUI Maps are under constant revision when the site is undergoing frequent program loads. 
Winrunner Result Naming Conventions 
1. When beginning a script, allow the default res## name to be filed.
2. After a successful run of a script where the results will be used toward a report, move the file to results and rename it: XX for project name, res for Test Results, 0718 for the date the script was run, your initials, and the original default number for the script, e.g. XXres0718jr.1. 
Winrunner Report Naming Conventions 
1. When the accumulated test result files for the day are formulated and the statistics are confirmed, a report will be filed that is accessible by upper management. The daily report file will be named as follows: XXdaily0718 (XX for project name, daily for daily report, and 0718 for the date the report was issued).
2. When the accumulated test result files for the week are formulated and the statistics are confirmed, a report will be filed that is accessible by upper management. The weekly report file will be named as follows: XXweek0718 (XX for project name, week for weekly report, and 0718 for the date the report was issued). 
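The result and report names follow the same date-stamped pattern; the sketch below simply composes the example names given above (XXres0718jr.1, XXdaily0718, XXweek0718) and is illustrative only.

```python
def result_name(project, month, day, initials, default_number):
    """e.g. result_name("XX", 7, 18, "jr", 1) -> "XXres0718jr.1"."""
    return f"{project}res{month:02d}{day:02d}{initials}.{default_number}"

def report_name(project, period, month, day):
    """period is "daily" or "week", e.g. "XXdaily0718" or "XXweek0718"."""
    return f"{project}{period}{month:02d}{day:02d}"

print(result_name("XX", 7, 18, "jr", 1))   # XXres0718jr.1
print(report_name("XX", "daily", 7, 18))   # XXdaily0718
```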
Winrunner Script, Result and Report Repository 
1. LAB 11, located within the XX Test Lab, will house the original Winrunner Script, Results and Report Repository for automated testing within the Generic Project. WRITE access is granted to Winrunner Technicians, and READ ONLY access is granted to those who are authorized to run scripts but not to make improvements. This is meant to maintain the purity of each script version.
2. Winrunner on LAB11 W Drive houses all Winrunner-related documents, etc. for XX automated testing.
3. Project file folders for the Generic Project represent the initial 
structure of project folders utilizing automated testing. As our 
automation becomes more advanced, the structure will spread to other 
appropriate areas.
4. Under each Project file folder, a folder for SCRIPT, RESULT and REPORT can be found.
5. All automated scripts generated for each project will be filed under Winrunner on LAB11 W Drive/Generic/Scripts Folder and moved to the ARCHIVE SCRIPTS folder as necessary.
6. All GUI MAPS generated will be filed under Winrunner on LAB11 W Drive/Generic/Scripts/gui_files Folder.
7. All automated test results are filed under the individual Script 
Folder after each script run. Results will be referred to and reports 
generated utilizing applicable statistics. Automated Test Results 
referenced by reports sent to management will be kept under the 
Winrunner on LAB11 W Drive/Generic/Results Folder. Before work on 
evaluating a new set of test results is begun, all prior results are 
placed into Winrunner on LAB11 W Drive/Generic/Results/Archived Results 
Folder. This will ensure all reported statistics are available for 
closer scrutiny when required.
8. All reports generated from automated scripts and sent to upper management will be filed under Winrunner on LAB11 W Drive/Generic/Reports Folder.  
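Item 7 above calls for archiving all prior results before a new set of test results is evaluated. A minimal housekeeping sketch follows; the drive paths are placeholders standing in for the LAB11 W Drive folders named above.

```python
import shutil
from pathlib import Path

def archive_prior_results(results_dir, archive_dir):
    """Move prior result files into the archive folder so that all
    previously reported statistics remain available for closer scrutiny."""
    results = Path(results_dir)
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    for item in results.iterdir():
        if item.is_file():   # leave sub-folders (e.g. the archive itself) alone
            shutil.move(str(item), str(archive / item.name))

# Placeholder paths standing in for
# "Winrunner on LAB11 W Drive/Generic/Results" and ".../Results/Archived Results":
# archive_prior_results("W:/Generic/Results", "W:/Generic/Results/Archived Results")
```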
Test Preparation Specifications 
Test Environment
Environment for Automated Test
The automated test environment is indicated below. Existing dependencies are entered in the Comments column.  
| Environment | Test System | Comments | 
| Test System/Integration Test (SIT) | Cert | Access via http://xxxxx/xxxxx | 
| Production | Production | Access via http://www.xxxxxx.xxx | 
| Other (specify) | Development | Individual Test Environments | 
Hardware for Automated Test
The following is a list of the hardware needed to create a production-like environment: 
| Manufacturer | Device Type | 
| Various | Personal
 Computer (486 or Higher) with monitor & required peripherals; with 
connectivity to internet test/production environments. Must be enabled 
to ADDITIONAL REQUIREMENTS. | 
Software
The following is a list of the software needed to create a production-like environment: 
| Software | Version (if applicable) | Programmer Support | 
| Netscape Navigator | ZZZ or higher | - | 
| Internet Explorer | ZZZ or higher | - | 
Test Team Roles and Responsibilities 
| Role | Responsibilities | Name | 
| COMPANY NAME Sponsor | Approve project development, handle major issues related to project development, and approve development resources | Name, Phone | 
| XXX Sponsor | Signature approval of the project, handle major issues | Name, Phone | 
| XXX Project Manager | Ensures all aspects of the project are being addressed from CUSTOMERS’ point of view | Name, Phone | 
| COMPANY NAME Development Manager | Manage
 the overall development of project, including obtaining resources, 
handling major issues, approving technical design and overall timeline, 
delivering the overall product according to the Partner Requirements | Name, Phone | 
| COMPANY NAME Project Manager | Provide
 PDD (Project Definition Document), project plan, status reports, track 
project development status, manage changes and issues | Name, Phone | 
| COMPANY NAME Technical Lead | Provide
 Technical guidance to the Development Team and ensure that overall 
Development is proceeding in the best technical direction | Name, Phone | 
| COMPANY NAME Back End Services Manager | Develop and deliver the necessary Business Services to support the PROJECT NAME | Name, Phone | 
| COMPANY NAME Infrastructure Manager | Provide PROJECT NAME development certification, production infrastructure, service level agreement, and testing resources | Name, Phone | 
| COMPANY NAME Test Coordinator | Develops
 ADTP and Detail Test Plans, tests changes, logs incidents identified 
during testing, coordinates testing effort of test team for project | Name, Phone | 
| COMPANY NAME Tracker Coordinator/ Tester | Tracks
 XXX’s in DEFECT TRACKING TOOL. Reviews new XXX’s for duplicates, 
completeness and assigns to Module Tech Leads for fix. Produces status 
documents as needed. Tests changes, logs incidents identified during 
testing. | Name, Phone | 
| COMPANY NAME Automation Engineer | Tests changes, logs incidents identified during testing | Name, Phone | 
Test Team Training Requirements 
Automation Training Requirements 
| Training Requirement | Training Approach | Target Date for Completion | Roles/Resources to be Trained | 
| . | . | . | . | 
| . | . | . | . | 
Automation Test Preparation 
- Write and receive approval of the ADTP from Generic Project management 
- Manually test the cases in the plan to make sure they actually work before recording repeatable scripts 
- Record appropriate scripts and file them according to the naming conventions described within this document 
- The initial order of automated script runs will be to load GUI Maps through a STARTUP script. After the successful run of this script, scripts testing all paths will be kicked off. Once an appropriate number of PNRs are generated, GenericCancel scripts will be used to automatically take the inventory out of the test profile and system environment (see the sketch after this list). During the automation test period, requests for testing of certain functions can be accommodated as necessary, as long as those functions can be tested by automation. 
- The ability to use Generic Automation will be READ ONLY for anyone outside of the test group. This is required to maintain the pristine condition of master scripts in our data repository. 
- Generic
 Test Group will conduct automated tests under the rules specified in 
our agreement for use of the Winrunner tool marketed by Mercury 
Interactive. 
- Results filed for each run will be analyzed as necessary, reports generated, and provided to upper management. 
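The run order described in the list above (a STARTUP script to load the GUI Maps, then the path scripts, then the GenericCancel scripts) is sketched below. The runner function and script names are placeholders; in practice these runs are kicked off through Winrunner, not Python.

```python
def run_script(name):
    """Placeholder for kicking off a recorded Winrunner script."""
    print(f"running {name}")
    return True

def automation_run(path_scripts, cancel_scripts):
    # 1. Load the GUI Maps through the STARTUP script.
    if not run_script("STARTUP"):
        raise RuntimeError("GUI Maps failed to load; aborting the run")
    # 2. Kick off the scripts testing all paths.
    for script in path_scripts:
        run_script(script)
    # 3. Once enough PNRs have been generated, back the inventory out again.
    for script in cancel_scripts:
        run_script(script)

automation_run(["GEair0710.2", "GEhtl0710"], ["GenericCancel"])
```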
Test Issues and Risks 
Issues 
The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will not be maintained; these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue Management Process. 
| Issue | Impact | Target Date for Resolution | Owner | 
| COMPANY NAME test team is not in possession of market data regarding what browsers are most in use in CUSTOMER target market. | Testing may not cover some browsers used by CLIENT customers | Beginning of Automated Testing during System and Integration Test Phase | CUSTOMER TO PROVIDE | 
| OTHER | . | . | . | 
Risks 
The table below identifies any high-impact or highly probable risks that may affect the success of the Automated testing process. 
Risk Assessment Matrix 
| Risk Area | Potential Impact | Likelihood of Occurrence | Difficulty of Timely Detection | Overall Threat(H, M, L) | 
| 1. Unstable Environment | Delayed Start | HISTORY OF PROJECT | Immediately | . | 
| 2. Quality of Unit Testing | Greater delays taken by automated scripts | Dependent upon quality standards of development group | Immediately | . | 
| 3. Browser Issues | Intermittent Delays | Dependent upon browser version | Immediately | . | 
Risk Management Plan 
| Risk Area | Preventative Action | Contingency Plan Action | Trigger | Owner | 
| 1. Meet with Environment Group | . | . | . | . | 
| 2. Meet with Development Group | . | . | . | . | 
| 3. | . | . | . | . | 
Traceability Matrix 
The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project’s completion.
Each business requirement must have an established priority as outlined in the Business Requirements Document.
They are:
Essential – Must satisfy the requirement to be accepted by the customer.
Useful – Value-added requirement influencing the customer’s decision.
Nice-to-have – Cosmetic non-essential condition, makes product more appealing.
The Traceability Matrix will change and evolve throughout the entire 
project life cycle. The requirement definitions, priority, functional 
requirements, and automated test cases are subject to change and new 
requirements can be added. However, if new requirements are added or 
existing requirements are modified after the Business Requirements 
document and this document have been approved, the changes will be 
subject to the change management process.
The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition and the project, a copy will be added to the project notebook. 
Functional Areas of Traceability Matrix 
| # | Functional Area | Priority | 
| B1 | Pond | E | 
| B2 | River | E | 
| B3 | Lake | U | 
| B4 | Sea | E | 
| B5 | Ocean | E | 
| B6 | Misc | U | 
| B7 | Modify | E | 
| L1 | Language | E | 
| EE1 | End-to-End Testing | EE | 
Legend:
B = Order Engine
L = Language
N = Nice to have
EE = End-to-End
E = Essential
U = Useful
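To make the tracing concrete, a coverage check over the matrix might look like the sketch below; the record layout is hypothetical and only mirrors the table above, with the linked automated test cases filled in as they are defined.

```python
# Each requirement carries its priority (E, U, N) and the automated test
# cases traced to it; an empty list flags a coverage gap in the matrix.
traceability = {
    "B1": {"area": "Pond", "priority": "E", "test_cases": ["AB1.1.1"]},
    "B3": {"area": "Lake", "priority": "U", "test_cases": []},
}

uncovered = [req for req, row in traceability.items() if not row["test_cases"]]
print("Requirements with no automated test case:", uncovered)   # ['B3']
```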
Definitions for Use in Testing 
Test Requirement 
A scenario is a prose statement of requirements for the test. Just as there are high-level and detailed requirements in application development, there is a need to provide detailed requirements in the test development area. 
Test Case 
A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain the actual entries to be executed as well as the expected results, i.e., what a user entering the commands would see as a system response. 
Test Procedure 
Test procedures define the activities necessary to execute a test case 
or set of cases. Test procedures may contain information regarding the 
loading of data and executables into the test system, directions 
regarding sign in procedures, instructions regarding the handling of 
test results, and anything else required to successfully conduct the 
test.  
Automated Test Cases 
NAME OF FUNCTION Test Case 
| Project Name/Number | Generic Project / Project Request # | Date |  | 
| Test Case Description | Check that all drop-down boxes, fill-in boxes, and pop-up windows operate according to requirements on the main Pond web page. | Build # |  | 
|  |  | Run # |  | 
| Function / Module Under Test | B1.1 | Execution Retry # |  | 
| Test Requirement # |  | Case # | AB1.1.1 (A for Automated) | 
| Written by |  | Goals | Verify that the Pond module functions as required. | 
| Setup for Test | Access browser, go to … |  |  | 
| Pre-conditions | Login with name and password. When arrive at Generic Main Menu… |  |  | 

| Step | Action | Expected Results | Pass/Fail | Actual Results if Step Fails | 
|  | Go to Pond and check drop-down boxes, fill-in boxes, and pop-up windows. | From the Generic Main Menu, click on the Pond gif and go to the Pond web page. Once on the Pond web page, check all drop-down boxes for appropriate information (e.g. Time: 7a, 8a in 1-hour increments), fill in boxes (remarks allows alpha and numeric but no other special characters), and pop-up windows (e.g. Privacy: ensure it is retrieved, has correct verbiage, and closes). |  |  | 
Each automation project team needs to write up an automation standards document stating the following: 
- The installation configurations of the automation tool. 
- How the client machines' environment will be set up 
- Where the network repositories and manual test plan documents are located. 
- Identify the drive letter that all client machines must map to. 
- How the automation tool will be configured. 
- Identify what Servers and Databases the automation will run against. 
- Any naming standards that the test procedures, test cases and test plans will follow. 
- Any recording standards and scripting standards that all scripts must follow. 
- Describe what components of the product will be tested. 
Installation Configuration 
| Install Step: | Selection: | Completed: | 
| Installations Components | Full |  | 
| Destination Directory | C:\sqa6 |  | 
| Type Of Repository | Microsoft Access |  | 
| Scripting Language | SQA Basic only |  | 
| Test Station Name | Your PC Name |  | 
| DLL messages | Overlay all DLLs the system prompts for; Robot will not run without its own DLLs. |  | 
Client Machines Configuration 
| Configuration Item | Setting: | Notes: | 
| Lotus Notes | Shut down Lotus Notes before using Robot. | This will prevent mail notification messages from interrupting your scripts and will allow Robot to have more memory. | 
| Close all applications | Close all applications (except the SQA Robot recorder and the application you are testing). | This will free up memory on the PC. | 
| Shut down printing | Select the Printers window from the Start menu; select File -> Server Properties; select the Advanced tab; un-check the Notify check box. |  | 
| Shut down printing (network) | Bring up a DOS prompt; select the Z drive; type CASTOFF. |  | 
| Turn off screensavers | Select NONE or change the timeout to 90 minutes. |  | 
| Display settings for PC | Set in the Control Panel Display application: Colors – 256; Font Size – small; Desktop – 800 x 600 pixels. |  | 
| Map a network drive to {LETTER} | Bring up Explorer and map a network drive to this letter. |  | 
Repository Creation 
| Item | Information | 
| Repository Name |  | 
| Location |  | 
| Mapped Drive Letter |  | 
| Project Name |  | 
| Users set up for Project | Admin – no password | 
| Sbh files used in projects scripts |  | 
Client Setup Options for the SQA Robot tool 
| Option Window | Option | Selection | 
| Recording | ID list selections by | Contents | 
|  | ID Menu selections by | Text | 
|  | Record unsupported mouse drags as | Mouse click if within object | 
|  | Window positions | Record Object as text; Auto record window size | 
|  | While Recording | Put Robot in background | 
| Playback | Test Procedure Control | Delay Between :5000 milliseconds | 
|  | Partial Window Caption | On Each window search | 
|  | Caption Matching options | Check: Match reverse captions; Ignore file extensions; Ignore Parenthesis | 
| Test Log | Test log Management | Output Playback results to test log; All details | 
|  | Update SQA repository | View test log after playback | 
|  | Test Log Data | Specify Test Log Info at Playback | 
| Unexpected Window | Detect | Check | 
|  | Capture | Check | 
|  | Playback response | Select pushbutton with focus | 
|  | On Failure to remove | Abort playback | 
| Wait States | Wait Pos/Neg Region | Retry – 4 Timeout after 90 | 
|  | Automatic wait | Retry – 2 Timeout after 120 | 
|  | Keystroke option | Playback delay 100 milliseconds; check record delay after Enter key | 
| Error Recovery | On Script command Failure | Abort Playback | 
|  | On test case failure | Continue Execution | 
|  | SQA trap | Check all but last 2 | 
| Object Recognition | Do not change |  | 
| Object Data Test Definitions | Do not change |  | 
| Editor | Leave with defaults |  | 
| Preferences | Leave with defaults |  | 
Identify what Servers and Databases the automation will run against.
This {Project name} will use the following Servers:
{Add servers}
On these Servers it will be using the following Databases:
{Add databases} 
Naming standards for test procedures, cases and plans
The naming standards for this project are: 
Recording standards and scripting standards
In order to ensure that scripts are compatible on the various clients and run with the minimum maintenance, the following recording standards have been set for all scripts recorded:
1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Make use of main menu selections over using double clicks, toolbar items and pop up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not Save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.
6. Do a window existence test for every window you open; this will prevent scripts dying from slow client/server calls.
7. Do not use the mouse for drop-down selections; whenever possible use hotkeys and the arrow keys.
8. When navigating through a window, use the tab and arrow keys instead of the mouse; this will make maintenance of scripts due to UI changes easier in the future.
9. Create a template header file called testproc.tpl. This file will 
insert template header information on the top of all scripts recorded. 
This template area can be used for modification tracking and commenting 
on the script.
10. Comment all major selections or events in the script. This will make debugging easier.
11. Make sure that you maximize all MDI main windows in login initial scripts.
12. When recording, make sure you begin and end your scripts in the same position. For example, on the platform browser always start your script by opening the browser tree and selecting your activity (this will ensure that the activity window will always be in the same position), and likewise always end your scripts by collapsing the browser tree.  
Describe what components of the product will be tested.
This project will test the following components:
The objective is to: