STPCon session submission. What do you think?

An Automated Test is Real Code!

Session Information

An automated test is real code! We should follow good programming practices even when automating tests. Use Session-Based Test Management to explore your system under test. Model your automation design before you get started. Write test-first unit tests to help build your page objects. Once your page objects are built, make them portable so they can be reused in different test suites such as functional, conformance, and Cucumber tests. Organize your continuous integration jobs so they give you pinpointed information about your system under test.
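To make the test-first page object point concrete, here is a minimal sketch in RSpec; it is not material from the session itself, and LoginPage, its locators, and the mocked browser are illustrative names:

require "rspec"

# Write the spec first, then implement just enough page object to make it pass.
class LoginPage
  USERNAME_FIELD = "css=#username"   # hypothetical locators
  PASSWORD_FIELD = "css=#password"
  SUBMIT_BUTTON  = "css=#login"

  def initialize(browser)
    @browser = browser
  end

  def login(user, password)
    @browser.type(USERNAME_FIELD, user)
    @browser.type(PASSWORD_FIELD, password)
    @browser.click(SUBMIT_BUTTON)
  end
end

describe LoginPage do
  it "types the credentials and submits the form" do
    browser = mock("selenium browser")
    browser.should_receive(:type).with(LoginPage::USERNAME_FIELD, "scott")
    browser.should_receive(:type).with(LoginPage::PASSWORD_FIELD, "secret")
    browser.should_receive(:click).with(LoginPage::SUBMIT_BUTTON)
    LoginPage.new(browser).login("scott", "secret")
  end
end

Because the browser is mocked, the spec runs in milliseconds and lets you grow the page object one method at a time before any Selenium session exists.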

Selenium Fury is on RelishApp.com

Check out Selenium Fury on relishapp.com. I found the site when I was searching for RSpec 2 documentation. It organizes your Cucumber features into a really nice presentation so that your features can be consumed as documentation, and it has been a great tool for learning more about RSpec 2. To illustrate the difference, this is what it is like to view features on GitHub, and here is the same feature on relishapp.com.

Selenium Fury 5.5 released

I was able to restructure the gem and remove the dependency on RSpec. I have tested it successfully with both RSpec 1 and 2.

Lightning Talk From Selenium Conference

Check out my lightning talk on SeleniumFury, the page object factory for Ruby.

Selenium Fury 5.2 with custom generators has been released.

The latest release of Selenium Fury is ready to install. Check out the new Cucumber features at the project home, https://github.com/scottcsims/SeleniumFury. This version includes extended page element recognition and custom generators, and the GitHub project now includes support for Bundler and RVM.

Install the new release:   gem install selenium_fury

https://rubygems.org/gems/selenium_fury

 

What I liked about “Perfect Software And Other Illusions About Testing” by Gerald M. Weinberg

Perfect Software And Other Illusions About Testing

I recently finished reading “Perfect Software and Other Illusions About Testing” by Gerald Weinberg. I wanted to elaborate and reflect on a few points I enjoyed about the book. I recently attended the Software Test Professionals Conference, where Gerald Weinberg was signing his book. I had not heard of him at that point, but now I will be at the front of the line for his next book signing.

The first chapter, titled “Why do we bother testing?”, was a home run. This statement in particular shed new light on an old subject for me:

“Common mistake 5: Believing testing can improve a product: Testing gathers information about a product; it does not fix things it finds that are wrong. Testing does not improve a product; the improving is done by people fixing the bugs that testing has uncovered. Often when managers say, “Testing takes too long,” what they should be saying is, “Fixing the bugs in the product takes too long,” — a different cost category. Make sure you’re accounting for effort and time under the correct cost category.”

Chapter four is titled “What’s the difference between testing and debugging?” It gave two answers about pinpointing failures and locating faults that I was not expecting. I remember thinking at one point in my testing career that if I could find a defect and then locate the source of the problem in the code, I would be considered an amazing tester. So who should pinpoint failures?

“Common mistake 4: Demanding that testers pinpoint every failure: Testers can help developers with this job, if their time has been scheduled for it, but it is ultimately a developer responsibility. At least that’s what I’ve seen work best in the long run.”

“Common mistake 5:  Demanding that testers locate every fault:  This is totally a developer’s job, because developers have the needed skills.  Testers generally don’t have these skills, though at times, they may have useful hints.”

Chapter thirteen discusses different strategies for determining significance: the importance attached to a bug by the person who gets to decide what to do about it. I liked these four categories for determining the significance of an issue from a testing point of view:

Level 0: This issue is blocking other testing.
Level 1: Our product cannot be used if this issue isn’t resolved.
Level 2: The value of our product will be significantly reduced if this issue isn’t resolved.
Level 3: This issue will be important only if there are large numbers of similar issues when the product is shipped.

From the high-level project view, think about your project and how important fixing is. I learned that fixing is linked to an emotional value that is given to every fixable work item, and every member of the team might assign a different emotional value to each one. In that case, how do you manage what gets fixed, and in what order, when your team members have assigned different values to the same work items? I like how the summary of chapter thirteen answers this question:

“Our emotions carry information about how important things are. If we pay attention to emotions, listen, and address important matters before unimportant matters, we’ll be doing the best we can with the data we have.”

In conclusion, I highly recommend this book for anyone developing or testing software. It will also be helpful to anyone making decisions about how software is built or released.

The Selenium Fury gem is ready! Page object factory for Selenium and Ruby.

I have been working on converting our page object factory to open source for a few weeks. Now it is time to launch the HomeAway-sponsored open source project under the Apache 2.0 license. It is a furiously quick way to implement test automation, and I am planning to add more configuration options in the future.

This project started when I had to test a page with 300+ checkboxes and I did not want to enter them by hand. I used the page object generator to build a page class of Ruby variables holding Selenium locators. Everything was great until some of the checkboxes changed their ids and my tests started failing. I needed a quick way to find out how many had changed so I could update the locators by hand or regenerate the page. This is where the validators came in. I used Ruby’s support for reflection to open a class, navigate to the URL of the page, validate each locator with Selenium, and return a list of locators missing from the page. It worked perfectly for my page of 300+ checkboxes: over 40 had changed, so I quickly regenerated the page.
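The validator idea is roughly the sketch below. This is not the actual SeleniumFury source; CheckboxPage and its locator constants are illustrative, and it assumes an already-started selenium-client browser session:

# Reflect over a page object class's locator constants, open the page, and
# collect every locator Selenium can no longer find on it.
def missing_locators(page_class, browser, url)
  browser.open(url)
  page_class.constants.select do |const_name|
    locator = page_class.const_get(const_name)
    locator.is_a?(String) && !browser.is_element_present(locator)
  end
end

# missing = missing_locators(CheckboxPage, @browser, "/preferences")
# puts "#{missing.size} locators need to be updated or regenerated"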

Install with:

  • gem install selenium_fury

Check out the home page and examples at https://github.com/scottcsims/SeleniumFury.

Thanks to HomeAway for sponsoring this project.

Use an RSpec partial mock to write a quick-executing example.

I wanted to test an error message in my class that processes test results and sends them to Rally. The problem was that I did not want to actually create a Rally object and run the method that parses the results just to test the error message.
This is what my class looks like.

class AutomationRun
  def send_results_to_rally
    @rally = RallyUtils.new(ENV['RALLY_WORKSPACE'])
    push_test_case_results
    raise("Test Case Results Were Not Parsed Correctly") if test_case_results.empty?
    test_case_results.each do |result|
      @rally.update_test_case_result(:tc_id   => result.test_case_number_from_spec_report,
                                     :build   => result.build_number,
                                     :verdict => result.verdict,
                                     :notes   => result.note.format_note)
    end
  end
end

I want to skip these steps to test the error message:

  • don’t instantiate RallyUtils.new, since we don’t need @rally for the test
  • don’t parse the results, so don’t call the real push_test_case_results

This is what my test code looks like:

 it "should raise an exception if there are not test case results" do
    RallyUtils.stub!(:new).and_return(nil)
    automation_run=AutomationRun.new
    automation_run.should_receive(:push_test_case_results).and_return([])
    begin
    automation_run.send_results_to_rally
    rescue Exception=>e
    end
    e.message.should == "Test Case Results Were Not Parsed Correctly"
  end
  1. We use an RSpec stub to intercept the new call on the RallyUtils class and return nil. We could have also returned a mock object if we wanted to execute a call on the Rally class (see the sketch after this list). Either way, we are no longer creating a connection to Rally.
  2. Next we new up our AutomationRun class and set up our mock expectation to skip running the real push_test_case_results method.
  3. automation_run.should_receive tells RSpec to intercept the method push_test_case_results.
  4. and_return([]) just returns an empty array.
  5. We set up a begin/rescue block to capture the Exception object raised by the expected error.
  6. Now we call send_results_to_rally to trigger the error message.
  7. Lastly we verify that the message attribute of our error is "Test Case Results Were Not Parsed Correctly".
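For reference, if the code under test actually needed to call a method on @rally, the stubbed new could hand back a mock instead of nil. A quick sketch; the expected message and call count are assumptions for illustration, not part of the real test:

rally = mock("RallyUtils")
# expect the code under test to push exactly one result to Rally
rally.should_receive(:update_test_case_result).once
RallyUtils.stub!(:new).and_return(rally)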

So that is it: I created a partial mock, in that we are using a real AutomationRun object but we stubbed out one of its methods. We also stubbed out the new call to RallyUtils so we don’t create a connection. Now I can focus my test on exactly the behavior I wanted to test: the error message.

Going to Software Test Professionals Conference next week.

I am looking forward to my first trip to Las Vegas for http://www.stpcon.com/. I will have a chance to meet other Selenium Grid users and test out my strategies for test automation patterns. I will be attending all of the sessions on agile automation.

Deep Test 2.0 prerelease is available.

DeepTest allows you to run Ruby RSpec tests in parallel by providing a spec task that you include in your Rakefile. The 2.0 prerelease version of DeepTest offers support for RSpec 1.1.12. You can install it by running: gem install deep_test_pre. Require it in your Rakefile like this:

gem "deep_test_pre", "=2.0"
require "deep_test"

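Putting it together, a Rakefile might look something like the sketch below. The deep_test task options are an assumption based on my reading of the DeepTest README, so check the project documentation for the exact names:

require "rubygems"
gem "deep_test_pre", "=2.0"
require "deep_test"
require "spec/rake/spectask"

Spec::Rake::SpecTask.new(:parallel_spec) do |t|
  t.spec_files = FileList["spec/**/*_spec.rb"]
  # spread the specs across worker processes (option name assumed, not verified)
  t.deep_test :number_of_workers => 4
end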

There is currently a bug when you run Test::Unit tests with the DeepTest spec runner: the tests will all run twice but are only reported once.