
pytest workflow?


Jason Porath


Hi all - 

I'm developing some in-house ftrack convenience scripts, to help empower junior devs to work faster, and I would like to get a testing suite for them going with pytest. Has anyone done anything similar?

I'm fairly new to pytest, and while I believe I understand the fundamentals, I'm a bit unsure how best to set up an ftrack fixture as a data source. The options that have occurred to me:

  • Connect to a pared-down "mock" ftrack database that can live easily on disk, like a sqlite database <-- my ideal solution
  • Connect to a test ftrack server and, in the fixture code, manually set up all the ftrack data before I run the test, each time <-- seems like a ton of work, bound to miss edge cases
  • Connect to a test ftrack server and somehow prevent the session from doing any commits <-- seems dangerous, unsure of feasibility
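To make option 3 concrete, this is roughly the kind of guard I have in mind. `FakeSession` is just a stand-in so the snippet runs without a server; on a real `ftrack_api.Session` you would patch the bound `commit` the same way:

```python
# Illustrative sketch of option 3: refuse all commits during a test.
# FakeSession is a stand-in for ftrack_api.Session so this runs offline;
# the same patching would apply to a real session object.

class FakeSession:
    """Minimal stand-in for ftrack_api.Session (illustration only)."""
    def __init__(self):
        self.commit_count = 0

    def commit(self):
        self.commit_count += 1

def disable_commits(session):
    """Swap commit() for a guard that raises instead of writing."""
    def refuse():
        raise RuntimeError("commit() is disabled in tests")
    session.commit = refuse
    return session
```

In a pytest fixture the same thing could be done with `monkeypatch.setattr(session, "commit", refuse)`, so the original method is restored after each test.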

Does anyone have any insight on this?


Hi Jason, welcome to the forum (from one former DWA-er to another). I hope to hear some solutions from folks in the trenches, but I can give an overview of what I know.

Internally we use a couple of things for testing. Automated tests with pytest use a combination of mocking (the API is all JSON blobs back and forth, so it's pretty easy to mock things like reading server information, object schemas, etc.) and a disposable ftrack installation in a container. I'm not involved in the build process of that one, so I can't really say how much data is in the db when we spin up the container.

The other, less formal testing approach is with a heavier container we also use for product demos--it has a number of real-world datasets and associated media, so it takes a while to pull. We use Docker and Kubernetes internally, but at least one customer has adopted a similar approach to standing up a temporary server using Vagrant, I believe.

For populating local test and hacking instances, I use a combination of Python for setting up Projects and populating some data, and straight SQL for some of the settings that are tedious or impossible to set otherwise.
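The JSON-mocking idea above can be sketched with nothing but `unittest.mock`: hand the code under test a mock session whose call method returns a canned server reply. The `fetch_schemas` helper and the payload shape here are illustrative assumptions, not the real ftrack wire format:

```python
from unittest import mock

def fetch_schemas(session):
    """Code under test: asks the server for its entity schemas.
    (Hypothetical helper; the action name is made up for illustration.)"""
    return session.call([{"action": "query_schemas"}])

# Canned reply standing in for a JSON blob the server would return.
canned = [{"id": "Task"}, {"id": "Shot"}]

def test_fetch_schemas():
    session = mock.Mock()
    session.call.return_value = canned
    assert fetch_schemas(session) == canned
    session.call.assert_called_once()
```

Because the mock never touches the network, tests like this stay fast and need no server or container at all.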



@Jason Porath  I've had good results running unit and functional tests on test projects right on the production server. It's quite easy to construct any context with ftrack_api and release it after the tests finish. In the worst case you end up with a bunch of test entities on a test project that no one cares about.

So feel free to do something like:

import ftrack_api
import pytest

@pytest.fixture
def session():
    return ftrack_api.Session()

@pytest.fixture
def context(request, session):
    # Create a throwaway container for this test's entities.
    context = session.create('Folder', {'parent_id': ..., 'project_id': ..., 'name': ...})
    session.commit()

    def finalizer():
        session.rollback()       # discard anything the test left uncommitted
        session.delete(context)  # remove the throwaway container
        session.commit()

    request.addfinalizer(finalizer)
    return context
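The create/run/tear-down flow of that fixture can be seen end to end with a stand-in session, so the lifecycle is visible without a server. `FakeSession`, its methods, and `run_with_context` are illustrative assumptions, not ftrack_api itself:

```python
class FakeSession:
    """Tiny stand-in for ftrack_api.Session (illustration only)."""
    def __init__(self):
        self.entities = []   # "committed" entities
        self.pending = []    # created but not yet committed

    def create(self, type_, data):
        entity = {"type": type_, **data}
        self.pending.append(entity)
        return entity

    def commit(self):
        self.entities.extend(self.pending)
        self.pending = []

    def rollback(self):
        self.pending = []

    def delete(self, entity):
        if entity in self.entities:
            self.entities.remove(entity)

def run_with_context(session, test_body):
    """Mimic the fixture: build a context, run the test, clean up."""
    context = session.create('Folder', {'name': 'pytest-scratch'})
    session.commit()
    try:
        test_body(context)
    finally:
        session.rollback()       # drop uncommitted test changes
        session.delete(context)  # remove the scratch folder
        session.commit()
```

Whatever the test body does, the finally-block guarantees the scratch folder is removed, which is what keeps the "worst case" down to a few stray entities on a test project.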

A better approach is to run the tests against a staging ftrack server with a reasonably fresh database snapshot from production, but that is much more complex to set up.

