Automation Testing with Pytest


Test functions are pretty concise, and they do only what they intend to do.

Setting up and tearing down (instantiating and closing the Wallet) is taken care of by the wallet fixture.

Not only does this help you write reusable code, it also encourages separating test data from test logic.

If you look carefully, the wallet amount is a piece of test data supplied from outside the test logic, not hard-coded inside the test function.

@pytest.mark.parametrize('wallet', [(10,)], indirect=True)

In a more controlled environment, you can have a test data file, e.g. test-data.ini, in your repository, and a wrapper that reads it; your test function can then invoke another interface of the wrapper to read the test data.
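As an illustration only, such a wrapper could be as simple as the sketch below (the file name test-data.ini and the section/key names are assumptions, not part of the example above):

import configparser

def read_test_data(section, key, path='test-data.ini'):
    """Read a single value from the repository's test data file."""
    config = configparser.ConfigParser()
    config.read(path)
    return config[section][key]

# e.g. initial_amount = read_test_data('wallet', 'initial_amount')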

However, it is recommended to make your fixtures part of a special conftest.py file. This is a special file in pytest that lets tests discover global fixtures.
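A minimal sketch of such a conftest.py, assuming a simple Wallet class like the one in the example above (the class here is only a stand-in), could look like this:

# conftest.py
import pytest

class Wallet:
    """Stand-in for the Wallet class used in the example above."""
    def __init__(self, initial_amount=0):
        self.balance = initial_amount

    def close(self):
        self.balance = 0

@pytest.fixture
def wallet(request):
    # With indirect parametrization, request.param carries the tuple supplied
    # by @pytest.mark.parametrize('wallet', [(10,)], indirect=True).
    initial_amount = getattr(request, 'param', (0,))[0]
    w = Wallet(initial_amount)   # setup: instantiate the Wallet
    yield w
    w.close()                    # teardown: close the Wallet after the test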

But I have test cases to execute against many different data sets!

No worries, pytest has a cool feature for parametrizing your fixtures.

Let’s take a look at it with an example.

Let’s say your product exposes a CLI interface to manage it locally.

Your product also has lots of default parameters that get set on startup, and you want to validate the default values of all such parameters.

One could write one test case for each of these settings, but with pytest it’s much easier:

@pytest.mark.parametrize("setting_name, setting_value", [
    ('qdb_mem_usage', 'low'),
    ('report_crashes', 'yes'),
    ('stop_download_on_hang', 'no'),
    ('stop_download_on_disconnect', 'no'),
    ('reduce_connections_on_congestion', 'no'),
    ('global.max_web_users', '1024'),
    ('global.max_downloads', '5'),
    ('use_kernel_congestion_detection', 'no'),
    ('log_type', 'normal'),
    ('no_signature_check', 'no'),
    ('disable_xmlrpc', 'no'),
    ('disable_ntp', 'yes'),
    ('ssl_mode', 'tls_1_2'),
])
def test_settings_defaults(self, setting_name, setting_value):
    assert product_shell.run_command(setting_name) == \
        "The current value for '{0}' is '{1}'.".format(setting_name, setting_value), \
        'The {} default should be {}'.format(setting_name, setting_value)

Cool, isn’t it! You just wrote 13 test cases (one for each setting), and in the future, if you add a new setting to your product, all you need to do is add one more tuple on top.
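Note that the test above relies on a product_shell helper that talks to the product’s CLI; that helper is not shown in this post, but a hypothetical sketch might look like this (the CLI command name and its output format are assumptions):

import subprocess

class ProductShell:
    """Hypothetical thin wrapper around the product's local CLI."""
    def run_command(self, setting_name):
        # Assumed CLI syntax; replace with your product's real command.
        result = subprocess.run(
            ['product-cli', 'get', setting_name],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

product_shell = ProductShell()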

How does it integrate with UI tests with Selenium and API tests?

Well, your product can have multiple interfaces: CLI, like we discussed above, and similarly GUI and API. Before deploying your software, it is important to test all of them.

In enterprise software, where multiple components are interdependent and coupled, a change in one part may affect the rest.

Remember, pytest is just a framework to facilitate testing, not a specific type of testing.

So you can build your GUI tests with Selenium or your API tests with Python’s requests library, for example, and run them with pytest.
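For instance, an API test driven by pytest could be as plain as the following sketch (the endpoint URL and the expected JSON field are placeholders):

import requests

def test_get_user_returns_ok():
    # pytest collects and runs this just like any other test function.
    response = requests.get('https://api.example.com/v1/users/1', timeout=10)
    assert response.status_code == 200
    assert 'id' in response.json()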

For example, at a high level this could be your test repository structure.
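One possible layout, using the component names described below (the exact folder names are illustrative):

CloudApp/
├── apiobjects/
├── helpers/
├── lib/
├── pageobjects/
├── suites/
├── tests/
│   ├── api/
│   ├── cli/
│   └── gui/
└── conftest.py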

As you can see above, this gives good separation of components.

apiobjects: A good place for creating wrappers for invoking API endpoints.

You can have a BaseAPIObject and a derived class to match your requirements.

helpers: Write your helper methods here.

lib: Library files which can be used by different components, e.g. your fixtures in conftest, pageobjects, etc.

pageobjects: The Page Object design pattern can be used to create classes for your different GUI pages (see the sketch below, after this list). We at Tenable use Webium, which is a Page Object pattern implementation library for Python.

suites: You can keep your pylint code verification suites here; they are helpful for gaining confidence in your code quality.

tests: You can have test directories categorized by the flavor of your tests. This makes them easy to manage and explore.

Well, this is just for reference; the repository structure and dependencies can be laid out to match your needs.
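As a rough illustration of the page object idea (using plain Selenium rather than Webium; the URL and locators are placeholders):

from selenium.webdriver.common.by import By

class LoginPage:
    """Wraps the login page so tests never touch raw locators."""
    URL = 'https://app.example.com/login'

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def login(self, username, password):
        self.driver.find_element(By.ID, 'username').send_keys(username)
        self.driver.find_element(By.ID, 'password').send_keys(password)
        self.driver.find_element(By.ID, 'login-button').click()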

I have plenty of test cases, and I want to run them in parallel!

You may have plenty of test cases in your test suite, and there may be times when you would like to run them in parallel to reduce overall test execution time.

Pytest offers an awesome plugin named pytest-xdist to run tests in parallel; it extends pytest with some unique execution modes.

Install this plugin using pip:

pip install pytest-xdist

Let’s explore it quickly with an example.

I have an automation test repository, CloudApp, for my GUI tests using Selenium.

It is constantly growing with new test cases and now has hundreds of tests.

What I would like to do is run them in parallel and trim down my test execution time.

In a terminal, just type pytest in the project root folder or tests folder; this will execute all tests. To run them in parallel, pass the number of worker processes with -n:

pytest -s -v -n=2

[Image: pytest-xdist with tests running in parallel]

This can also help you run tests on multiple browsers in parallel.

Reporting

Pytest comes with built-in support for creating result files which can be read by Jenkins, Bamboo, or other continuous integration servers. Use this invocation:

pytest test/file/path --junitxml=path

This generates XML-style output which can be interpreted by many CI systems’ parsers.

Conclusion

Pytest’s popularity is growing every year. It also has wide community support, which gives you access to a lot of extensions, e.g. pytest-django, which helps you write tests for your Django web app integrations.

Remember, pytest supports running unittest test cases, so if you are using unittest, pytest is worth considering for the future.
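For example, a plain unittest test case like the sketch below is collected and executed by simply running pytest, with no changes to the code:

import unittest

class TestStringMethods(unittest.TestCase):
    # pytest discovers unittest.TestCase subclasses automatically.
    def test_upper(self):
        self.assertEqual('pytest'.upper(), 'PYTEST')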

Resources

https://docs.pytest.org/en/latest/
http://pythontesting.net/framework/pytest/pytest-introduction/
