Posts posted by Martin Pengelly-Phillips

  1. 18 hours ago, postmodern said:

    Thanks for your reply! But I mean that I need the latest versions for all assets in the task in one query session.

    That is not possible in a single query at present.

    Are you wanting to do it in one query for performance reasons?

    The API does support batching calls under the hood, but it doesn't seem to be implemented for queries yet. There are some TODOs in the code though...

    def _query(self, expression):
        '''Execute *query* and return (records, metadata).
    
        Records will be a list of entities retrieved via the query and metadata
        a dictionary of accompanying information about the result set.
    
        '''
        # TODO: Actually support batching several queries together.
        # TODO: Should batches have unique ids to match them up later.
        batch = [{
            'action': 'query',
            'expression': expression
        }]
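    Until batching is supported, one workaround is a single broad query followed by a client-side reduction: fetch all versions for the task in one query, then keep only the latest per asset locally. A minimal sketch below, with plain dicts standing in for AssetVersion entities; the query expression in the comment is illustrative, not tested.

```python
# Sketch: reduce a flat list of versions to the latest per asset.
# Plain dicts stand in for AssetVersion entities; with ftrack_api you
# would populate `versions` from a single query along the lines of
# (illustrative only):
#   session.query('AssetVersion where task_id is "{0}"'.format(task_id))

def latest_versions(versions):
    '''Return mapping of asset id to its highest-numbered version.'''
    latest = {}
    for version in versions:
        asset_id = version['asset_id']
        current = latest.get(asset_id)
        if current is None or version['version'] > current['version']:
            latest[asset_id] = version
    return latest

versions = [
    {'asset_id': 'a1', 'version': 1},
    {'asset_id': 'a1', 'version': 3},
    {'asset_id': 'a2', 'version': 2},
]
print(latest_versions(versions))
```

    This trades one round trip for transferring all versions, so it only pays off when the version count per asset is modest.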
  2. Quote

    It makes sense for us: using multiple sessions means creating multiple cache files, which increases the number of queries to the DB.

    Note that you could also write your own cache interface to connect to a shared cache if you want to reduce DB hits. For example, we run a Redis cache per site and created a RedisCache implementation as a subclass of ftrack_api.cache.Cache.
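    For illustration, here is a stand-alone sketch of such a cache. It mirrors the get / set / remove / keys interface we subclass from ftrack_api.cache.Cache, and takes any client exposing redis-py style get / set / delete / keys. The class name, key prefix, and JSON serialisation are simplifications of what we actually run, not ftrack API specifics.

```python
import json


class RedisBackedCache(object):
    '''Cache backed by a Redis-style client.

    Mirrors the get / set / remove / keys interface used by
    ftrack_api.cache.Cache subclasses; JSON serialisation here is a
    simplified assumption.
    '''

    def __init__(self, client, prefix='ftrack:'):
        self.client = client  # redis-py Redis instance, or compatible.
        self.prefix = prefix

    def get(self, key):
        '''Return value for *key*, raising KeyError if missing.'''
        value = self.client.get(self.prefix + key)
        if value is None:
            raise KeyError(key)
        return json.loads(value)

    def set(self, key, value):
        '''Store *value* against *key*.'''
        self.client.set(self.prefix + key, json.dumps(value))

    def remove(self, key):
        '''Remove *key* from the cache.'''
        self.client.delete(self.prefix + key)

    def keys(self):
        '''Return list of cached keys, with the prefix stripped.'''
        return [
            key[len(self.prefix):]
            for key in self.client.keys(self.prefix + '*')
        ]
```

    The per-site prefix is what lets several sessions (and machines) share one cache safely.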


    > @Martin Pengelly-Phillips: I am interested in how you are using the session. Are you creating a new session for each query / commit?

    No, we don't create a new session for each query or commit, as that would likely be redundant and slow. However, we often have a session (or two) in background threads in order to allow a main UI thread to continue unblocked. We also often run multiple sessions in background threads for event processing in order to avoid event deadlock.
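    The background-thread pattern itself is generic: push work to a worker thread and hand results back over a queue, so the main (UI) thread never blocks on a server round trip. A sketch, with a placeholder fetch function standing in for a blocking call such as session.query(...).all():

```python
import queue
import threading


def fetch(expression):
    '''Placeholder for a blocking call such as session.query(expression).all().'''
    return ['result for {0}'.format(expression)]


def worker(requests, results):
    '''Drain *requests*, executing each blocking fetch off the main thread.'''
    while True:
        expression = requests.get()
        if expression is None:  # Sentinel: shut down the worker.
            break
        results.put(fetch(expression))


requests, results = queue.Queue(), queue.Queue()
thread = threading.Thread(target=worker, args=(requests, results))
thread.start()

requests.put('Task where name is "compositing"')
print(results.get())  # Main thread collects the result when ready.

requests.put(None)
thread.join()
```

    In a real UI you would poll the results queue from the event loop rather than block on get(); note also that an ftrack session is not guaranteed thread-safe, which is exactly why we confine each session to its own thread.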

  4. For reference, we have implemented the concept of a 'huddle' for event plugins that groups published events and listeners around unique discoverable huddle identifiers. 

    So you can have multiple instances of an application such as Maya running for one user and be able to scope events to one particular instance. You can also discover all the available huddles dynamically and allow the user to interactively select between them. Seems to work well so far and greatly simplifies the scoping logic.
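    Conceptually, a huddle is just a shared identifier stamped onto published events and checked on receipt. Stripped of ftrack's event hub, the scoping logic reduces to something like the sketch below; the class and method names are ours for illustration, not an ftrack API.

```python
import uuid


class Huddle(object):
    '''Scope events to one application instance via a unique identifier.'''

    registry = {}  # Discoverable huddles: identifier -> human-readable label.

    def __init__(self, label):
        self.identifier = uuid.uuid4().hex
        self.label = label
        Huddle.registry[self.identifier] = label

    def stamp(self, event):
        '''Tag outgoing *event* data with this huddle's identifier.'''
        event.setdefault('data', {})['huddle'] = self.identifier
        return event

    def accepts(self, event):
        '''Return whether incoming *event* is scoped to this huddle.'''
        return event.get('data', {}).get('huddle') == self.identifier
```

    A listener simply ignores events where accepts(...) is False, and the registry is what lets a user interactively pick between running instances (e.g. two Maya sessions).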

  5. We are improving our setup and have been thinking about better cache invalidation.

    Currently, we have to reproduce server-side logic through introspection in order to invalidate linked entities efficiently. We have been discussing with the ftrack team having more information exposed through the API to make this less error prone. One alternative discussed was having dumb cache invalidation listeners, with ftrack simply emitting all the appropriate update events for any action (e.g. if a user is deleted, invalidate the 'author' reference on the affected entities). However, as some of the logic occurs in the database, the ftrack team were not sure they would be able to emit the events correctly.

  6. It would be very useful to be able to have custom entity link fields in order to reduce confusion and boost productivity.

    By this I mean the ability to add a field (custom attribute) that is a link to another type of entity in the system and displays as such in the spreadsheet view etc. We would want both single-entity and multi-entity support, I think.

    A real-world example is adding custom fields for assigning 'Reviewers' or 'Stakeholders' to tasks. It is confusing to just add them to Assignee, so an extra field would help our workflow here.

    Thoughts?
