Test automation artifacts can create unique testing opportunities against their systems under test. In this session, a conceptual analysis of a large-scale test suite's results that looks beyond the traditional pass/fail metric will reveal new automation capabilities. By programmatically examining test result data, attendees will learn about the efficiencies we discovered, including exploratory UI testing with WebDriver, test suite consolidation, and test error classification, all within existing automation solutions.

**Details:**

The test results generated by repeated execution of test automation scripts are a valuable and often ignored source of information about a system under test. During this session, I will share approaches discovered while leading a large-scale, multi-state test automation team looking to increase its test coverage. The ideas presented use both test results and client/server error tracking to show how new, adaptable automation can be generated, allowing more overall testing to be accomplished.

* The session will show how web element information logged within a test result can be used to programmatically generate variations of a test that act as exploratory UI testing within WebDriver.
* The session will show how test results can be used to collate and organize nearly identical tests into testing super-sets, an approach we have labeled Signature Based Automation.
* The session will show how test results can be programmatically inspected for related data features and failure points in order to provide test maintenance recommendations.
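As a flavor of the exploratory-variation idea, here is a minimal Python sketch: take the element locators and actions logged in a prior test result and generate reordered copies of the interaction sequence, each a candidate exploratory test to replay through WebDriver. The log record shape and the `generate_variations` helper are hypothetical, invented for illustration; they are not the session's actual implementation.

```python
import itertools

# Hypothetical shape of steps logged in a prior test result:
# the locator tuple and action WebDriver performed.
logged_steps = [
    {"locator": ("id", "username"), "action": "send_keys", "value": "user1"},
    {"locator": ("id", "password"), "action": "send_keys", "value": "secret"},
    {"locator": ("css selector", "button.login"), "action": "click", "value": None},
]

def generate_variations(steps, max_variations=5):
    """Produce reordered copies of a logged step sequence.

    Each variation is a candidate exploratory test: replaying the same
    elements in a different order probes UI states the original
    scripted test never exercised.
    """
    variations = []
    for perm in itertools.permutations(steps):
        if list(perm) != steps:          # skip the original ordering
            variations.append(list(perm))
        if len(variations) == max_variations:
            break
    return variations

variations = generate_variations(logged_steps)

# Each variation could then be replayed with WebDriver, roughly:
#   driver.find_element(*step["locator"]) followed by the logged action.
for i, var in enumerate(variations, 1):
    print(f"variation {i}: {[s['action'] for s in var]}")
```

The value check before replay (e.g., skipping orderings that type into a not-yet-visible field) is where most of the real engineering lives.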
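The super-set idea can be sketched as grouping tests by a signature computed from their normalized step sequences, with test data stripped out, so tests that differ only in data collapse into one parameterized super-set. The record format and `signature` function below are assumptions made for illustration, not the approach exactly as presented in the session.

```python
import hashlib
from collections import defaultdict

# Hypothetical test-result records: each test's recorded steps.
# 'login_a' and 'login_b' differ only in the data they typed.
results = {
    "login_a": [("open", "/login"), ("type", "user1"), ("click", "submit")],
    "login_b": [("open", "/login"), ("type", "user2"), ("click", "submit")],
    "search":  [("open", "/search"), ("type", "widgets"), ("click", "go")],
}

def signature(steps):
    """Hash the sequence of actions and targets, masking typed data,
    so tests that differ only in test data share a signature."""
    normalized = [(action, "<data>" if action == "type" else target)
                  for action, target in steps]
    return hashlib.sha256(repr(normalized).encode()).hexdigest()[:12]

# Collate nearly identical tests into super-sets keyed by signature.
super_sets = defaultdict(list)
for name, steps in results.items():
    super_sets[signature(steps)].append(name)

for sig, tests in super_sets.items():
    print(sig, tests)
```

A super-set can then run once with each member's data set, shrinking the suite without losing coverage.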
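The maintenance-recommendation idea might look like the sketch below: bucket stored failures into coarse categories by error type, and flag any identical error that recurs across tests, since repeated locator failures usually point at one broken selector rather than many product bugs. The failure records, category labels, and threshold are all invented for illustration.

```python
from collections import Counter

# Hypothetical failure records pulled from stored test results.
failures = [
    {"test": "login_a",  "error": "NoSuchElementException: button.login"},
    {"test": "login_b",  "error": "NoSuchElementException: button.login"},
    {"test": "search",   "error": "TimeoutException: page load"},
    {"test": "checkout", "error": "AssertionError: total mismatch"},
]

# Coarse error classes mapped to a suggested maintenance action.
CATEGORIES = {
    "NoSuchElementException": "locator drift (update the shared locator)",
    "TimeoutException": "environment/timing (tune waits, check infra)",
    "AssertionError": "product or data change (review expected values)",
}

def classify(error):
    for marker, category in CATEGORIES.items():
        if error.startswith(marker):
            return category
    return "unclassified"

by_category = Counter(classify(f["error"]) for f in failures)

# Flag identical errors seen in two or more tests as maintenance targets.
error_counts = Counter(f["error"] for f in failures)
recommendations = [err for err, n in error_counts.items() if n >= 2]

print(by_category)
print(recommendations)
```

Feeding client/server error tracking into the same classifier lets the report distinguish test-side rot from genuine application failures.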