  1. Hey,

     So I've been playing around with the events part of the API and noticed that the entities it propagates mostly report the "entityType" "task", even when the event (e.g. a rename) was performed on a Shot, Folder, Task, etc. The problem I'm having is that I need to fetch the entity object so I can act on the event, but the subsequent "session.get" call takes the specific type ("Task", "Shot" and the like), not the "entityType" value "task". Is there a way to get an entity purely by its ID, without knowing its type?

     Essentially, I'm working out how to prevent renames on anything in the project hierarchy (whatever its type) that has an asset linked to it, and I'd rather not hard-code the types if possible.

     Thanks,
     Mark
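     One approach worth sketching: in the ftrack Python API the concrete types (Task, Shot, Folder, ...) share a common base, TypedContext, so a single polymorphic lookup can avoid hard-coding types. The helper below pulls the ids of renamed entities out of an update event; the payload key names ("entities", "entityId", "changes") are assumptions based on observed event payloads, not guaranteed by the docs:

     ```python
     def rename_targets(event):
         """Return ids of entities whose name changed in this event,
         regardless of their concrete type (Shot, Folder, Task, ...).

         NOTE: the payload shape used here ("data" -> "entities",
         each with "entityId" and "changes") is an assumption based
         on observed ftrack update events.
         """
         targets = []
         for entity in event.get("data", {}).get("entities", []):
             changes = entity.get("changes") or {}
             if "name" in changes:
                 targets.append(entity["entityId"])
         return targets

     # Each id could then be resolved polymorphically through the shared
     # base class instead of a concrete type, e.g. (untested sketch):
     #   obj = session.get("TypedContext", entity_id)
     ```

     From there, checking the resolved object for linked assets would decide whether to reject the rename.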
  2. Hey,

     I'm currently looking into how to retrieve a project hierarchy efficiently on three fronts: data transmission (response size), number of queries (for use over high-latency connections, > 300 ms), and the shape of the returned data (reducing the Python time spent reformatting it). What would the best method be in ftrack?

     Currently we use a 5-column hierarchy, with data requests split between the first 3 and last 2 columns: we fetch the entire hierarchy for the first 3 columns (folders), and clicking an item in the 3rd column fetches the next two columns for that item. We've found this strikes a reasonable balance between the number of queries needed to reach the final asset (when each request takes upwards of 300 ms just for data transmission) and DB load on our current system. The latency comes from offsite use of our asset management system that we need to cater for; most users see sub-5 ms latency, but some are on more than 300 ms due to geographic distance.

     Essentially we're trying to see whether ftrack can replace this without going backwards in the speed of retrieving project hierarchies, and in turn the final asset. Any tips would be highly appreciated.

     Thanks,
     Mark
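     One pattern that tends to win at high latency is collapsing the round trips: fetch the whole tree as flat rows in a single projected query (something like `select id, name, parent_id from TypedContext where project_id is "..."` — that query string is an assumption about how the fetch would look), then reassemble the hierarchy client-side in one pass. The reassembly itself is plain Python:

     ```python
     from collections import defaultdict

     def build_tree(rows):
         """Reassemble a hierarchy client-side from one flat query result.

         Each row is (id, name, parent_id), with parent_id None for
         top-level items. Returns a list of nested dicts, one per root.
         """
         children = defaultdict(list)
         names = {}
         for node_id, name, parent_id in rows:
             names[node_id] = name
             children[parent_id].append(node_id)

         def subtree(node_id):
             return {
                 "name": names[node_id],
                 "children": [subtree(c) for c in children.get(node_id, [])],
             }

         return [subtree(root) for root in children.get(None, [])]
     ```

     The trade-off versus your current scheme: one large response instead of many small ones, so a 300 ms round trip is paid once per project rather than once per column.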