Automate Browser Actions (DOM & UI)
Telerik Testing Framework supports three types of automation actions:
Browser control: Commands that make the browser perform an action. These actions are exposed using the 'Browser' object. Each open window or frame is controlled by its own 'Browser' object. This object lets you do things like Navigate to a URL, Go back, Go forward, Refresh, Wait until ready, and Scroll window.
DOM automation: Actions are executed directly against the DOM of the application being tested. All DOM actions are exposed using an 'Actions' object that is a property of each instantiated 'Browser' object. Some of the more common actions include Set Text, Check, Click, Select Drop Down, Scroll to Visible, Invoke Event, Invoke Script, Wait for Element.
Pure UI automation: Actions are performed as true mouse/keyboard actions that simulate real user interactions. These actions are exposed using a 'Desktop' object that is a property of the 'Manager' object.
The code sample below demonstrates browser control and direct DOM actions.
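Here is a minimal sketch of such a sample. It assumes a console application referencing the Telerik Testing Framework assemblies, plus a hypothetical page at 'http://www.example.com/form.html' containing an 'input1' text box and a 'submit1' button; the URL and element IDs are placeholders only.

```csharp
using ArtOfTest.WebAii.Core;
using ArtOfTest.WebAii.ObjectModel;

class BrowserAndDomActionsDemo
{
    static void Main()
    {
        // Start the framework and launch a browser (browser control).
        Manager manager = new Manager(new Settings());
        manager.Start();
        manager.LaunchNewBrowser(BrowserType.InternetExplorer);
        Browser browser = manager.ActiveBrowser;

        // Browser control: navigate and wait until the page is fully loaded.
        // The URL is a placeholder for your application under test.
        browser.NavigateTo("http://www.example.com/form.html");
        browser.WaitUntilReady();

        // DOM automation: locate elements in the DOM and act on them directly.
        // 'input1' and 'submit1' are hypothetical element IDs.
        Element input = browser.Find.ById("input1");
        Element submit = browser.Find.ById("submit1");

        browser.Actions.SetText(input, "Hello from Telerik Testing Framework");
        browser.Actions.Click(submit);

        // Browser control again: go back, refresh, then shut everything down.
        browser.GoBack();
        browser.Refresh();
        manager.Dispose();
    }
}
```

Each DOM action above goes straight through the browser's DOM, so it works even when the element is scrolled out of view; the pure UI actions that follow do not have that luxury.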
Let's look at some pure UI automation that simulates some of the DOM actions above. Before we look at the sample code, there are a few things to note about pure UI automation using Telerik Testing Framework:
When using pure UI actions, you need to make sure that the element is actually visible within the browser's client area. On large pages, the browser may need to scroll to bring the element into view. Telerik Testing Framework provides a 'ScrollToVisible' action that you should invoke if your page is large and you are not sure whether a specific element is visible. This feature is also available outside of coded solutions, from the Elements menu while recording.
The 'Desktop' object exposes 'Mouse' and 'Keyboard' objects that represent mouse and keyboard input respectively. Mouse actions usually take a screen coordinate to target. Each element in the DOM tree has a GetRectangle() method that retrieves the element's current X/Y position and height/width as it is rendered on screen, taking any browser scrolling into account. You can pass these rectangle coordinates directly to the mouse action methods.
Given that pure UI automation simply uses mouse and/or keyboard events, you can use these actions not only to automate elements within a browser, but also to automate simple Win32 windows. The Native Win32 Window Support topic covers this in more detail.
The code below shows how to set the text for the 'input1' text box using pure UI automation:
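Here is a sketch of that code. It assumes the manager has already been started and the page containing the hypothetical 'input1' text box is loaded, and it follows the notes above: scroll the element into view, get its on-screen rectangle, then drive the real mouse and keyboard. Exact member names and namespaces (for example, where MouseClickType lives) can vary slightly between framework versions.

```csharp
using System.Drawing;
using ArtOfTest.Common.Win32;       // assumed location of MouseClickType; adjust to your version
using ArtOfTest.WebAii.Core;
using ArtOfTest.WebAii.ObjectModel;

static class PureUiSetText
{
    // Assumes the manager is already started and the page containing the
    // hypothetical 'input1' text box is already loaded.
    public static void SetTextWithRealInput(Manager manager)
    {
        Browser browser = manager.ActiveBrowser;

        // Locate the element in the DOM. 'input1' is a placeholder ID.
        Element input1 = browser.Find.ById("input1");

        // Make sure the element is visible in the browser's client area
        // before driving it with real mouse/keyboard input.
        browser.Actions.ScrollToVisible(input1);

        // GetRectangle() returns the element's current on-screen rectangle,
        // taking any browser scrolling into account.
        Rectangle rect = input1.GetRectangle();

        // Click the element with the real mouse, then type with the real
        // keyboard (50 ms pause between key presses).
        manager.Desktop.Mouse.Click(MouseClickType.LeftClick, rect);
        manager.Desktop.KeyBoard.TypeText("Hello from pure UI automation", 50);
    }
}
```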
Advanced Scenarios
The 'Mouse' object offers a rich set of APIs for complex UI automation, including operations like DragAndDrop that are extremely difficult to perform using direct DOM automation. Such operations are becoming more common in the rich-content applications emerging on the web. For example, you can use this DragAndDrop support to automate portals with web part regions that can be dragged and repositioned, such as Microsoft's SharePoint server or Google's customized homepage.
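As a rough illustration, the sketch below drags one DOM element onto another using real mouse input. The 'widget1' and 'dropZone' IDs are placeholders, both elements are assumed to be visible in the browser view at the same time, and a Point-to-Point DragDrop overload is assumed:

```csharp
using System.Drawing;
using ArtOfTest.WebAii.Core;
using ArtOfTest.WebAii.ObjectModel;

static class DragDropDemo
{
    // Drags the center of one element onto the center of another using
    // real mouse input. 'widget1' and 'dropZone' are placeholder IDs.
    public static void DragWidgetToDropZone(Manager manager)
    {
        Browser browser = manager.ActiveBrowser;

        Element widget = browser.Find.ById("widget1");
        Element dropZone = browser.Find.ById("dropZone");

        // Make sure both elements are on screen before using real input.
        browser.Actions.ScrollToVisible(widget);
        browser.Actions.ScrollToVisible(dropZone);

        Rectangle source = widget.GetRectangle();
        Rectangle target = dropZone.GetRectangle();

        Point from = new Point(source.X + source.Width / 2, source.Y + source.Height / 2);
        Point to = new Point(target.X + target.Width / 2, target.Y + target.Height / 2);

        // Perform the drag with the real mouse (assumed Point-to-Point overload).
        manager.Desktop.Mouse.DragDrop(from, to);
    }
}
```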
In addition, the 'Mouse' object supports offset coordinates, which are very helpful when automating drag-and-drop actions. For example, if you are attempting to drag a window from one location to another, you need to click and drag its title bar to a destination X/Y coordinate. Using the simplified coordinate offsets, you can easily choose which part of the window to click and drag. This feature is also well suited to automating image maps.
For example, suppose you are attempting to click the red (x) in the top-right corner of a window in order to drag it to another location.
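You would write something like the sketch below. It assumes a hypothetical draggable element with the ID 'dialogWindow'; because the exact offset-reference overloads of the 'Mouse' methods can vary between framework versions, the offset from the window's top-right corner is computed by hand from GetRectangle() here, while the framework's built-in offset support would let you express the same intent more directly.

```csharp
using System.Drawing;
using ArtOfTest.WebAii.Core;
using ArtOfTest.WebAii.ObjectModel;

static class OffsetDragDemo
{
    // Clicks near the red (x) in the top-right corner of a window-like
    // element and drags it to a new location. 'dialogWindow' is a
    // placeholder ID; the pixel offsets are illustrative only.
    public static void DragByTopRightCorner(Manager manager)
    {
        Browser browser = manager.ActiveBrowser;

        Element window = browser.Find.ById("dialogWindow");
        browser.Actions.ScrollToVisible(window);

        Rectangle rect = window.GetRectangle();

        // Offset from the window's top-right corner: 10 px in from the
        // right edge, 10 px down from the top edge.
        Point grabPoint = new Point(rect.Right - 10, rect.Top + 10);

        // Destination: 200 px to the left and 100 px down (arbitrary values).
        Point dropPoint = new Point(grabPoint.X - 200, grabPoint.Y + 100);

        // Perform the click-drag with the real mouse
        // (assumed Point-to-Point DragDrop overload).
        manager.Desktop.Mouse.DragDrop(grabPoint, dropPoint);
    }
}
```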