
Steve Petterborg

Administrators
  • Posts: 81
  • Joined
  • Last visited
  • Days Won: 7

Everything posted by Steve Petterborg

  1. Here's an example of letting the server perform the calculation:

         import humanize

         actions = {
             proj: {
                 'action': 'storage_usage',
                 'entity_type': 'Project',
                 'entity_id': proj['id'],
             }
             for proj in session.query('Project')
         }
         results = session.call(list(actions.values()))

         width = max(len(proj['name']) for proj in actions)
         for proj, result in zip(actions, results):
             print(f"{proj['name']:{width}} {humanize.naturalsize(result['data'])}")

     results is an ordered list of dictionaries, which is why I use zip() in this example to label the returned values with the corresponding Project names. We'll follow up on your other questions.
  2. Not every user or global API key is able to access all projects. In particular, if a Project is set to private access, then the relevant User or global API key would need to be explicitly granted access to that Project. Otherwise I would expect it to be filtered out of the results.
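     A quick way to see this in practice (assuming an existing ftrack_api.Session named "session") is to list the Projects that particular set of credentials can see; private Projects the account hasn't been granted access to simply won't appear:

         # Private Projects the user/API key cannot access are absent from the result.
         for proj in session.query('select full_name from Project'):
             print(proj['full_name'])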
  3. Hi Honda,

     Here is a small snippet showing how to retrieve the real id and type from tempdata, and an example of the contents. I believe that the type attribute will be in what I call legacy API form (e.g. "show" and "task" instead of "Project" and "TypedContext" as in the new API).

         results = session.call([{
             'action': '_get_tempdata',
             'id': tempdata_id,
         }])
         # results = [[
         #     {
         #         "type": "assetversion",
         #         "id": "ab580b37-70a8-45c9-8008-09aa4e71b6b7"
         #     },
         #     {
         #         "type": "assetversion",
         #         "id": "6cefaf1c-f8e6-47df-bbe8-1e1801b1d571"
         #     }
         # ]]

     Below are three variations on how to resolve paths for those AssetVersions. The first two assume that you've defined "componentName" to be something like "main" to match the value in system settings for the particular Asset type that you're using. However, the third example shows how to query the Asset for the proper Component name.

         # URLs will contain the session's API key.
         for result in results[0]:
             if result['type'] != 'assetversion':
                 continue
             playable_component = session.query(
                 f'''Component where version_id is "{result['id']}"'''
                 f''' and name is "{componentName}"'''
             ).one()
             print(server_location.get_url(playable_component))

         # URLs will be signed and expire after some time.
         for result in results[0]:
             component_location = session.query(
                 f'''select url from ComponentLocation where component.name is "{componentName}"'''
                 f''' and component.version_id is "{result['id']}"'''
             ).one()
             print(component_location['url']['value'])

         import ftrack_api.event.base

         # This shows the resolver service used by the ftrack web UI.
         for result in results[0]:
             av = session.query(
                 f'''select asset.type.component, components.name from AssetVersion'''
                 f''' where id is "{result['id']}"'''
             ).one()
             comps = {
                 comp['id']: comp
                 for comp in av['components']
                 if comp['name'] == av['asset']['type']['component']
             }
             for comp_id in comps:
                 event = ftrack_api.event.base.Event(
                     topic='ftrack.location.request-resolve',
                     data={
                         'componentId': comp_id,
                     }
                 )
                 session.event_hub.publish(
                     event=event,
                     synchronous=False,
                     on_reply=lambda event: print(event['data']['path']),
                 )
         session.event_hub.wait()
  4. We've updated the docs to clarify that the Event Hub is not supported under Node. Please let us know if you encountered this error in some other environment. https://ftrack-javascript-api.readthedocs.io/en/latest/handling_events.html#node-support
  5. Hi Peter, what's your use case? Just caching objects (or some other server state) locally or something else?
  6. I think all of us in the (virtual) office were surprised that this was possible. It's definitely not fully supported (we make some assumptions that Tasks are always leaves, not internal nodes in the graph/hierarchy) and I wouldn't suggest it. Maybe in ftrack 5 though!
  7. I'm going to start by responding to everything in order, then circle back to ask some questions about your workflow.

     Formatting issues aside (it should probably say "Task" somewhere in there), what that's communicating is that Asset parents are always a non-Task Context object. The design is that Tasks (or TypedContexts) which are siblings can or do represent different sorts of work on the same group of Assets.

     As you found, an Asset cannot have multiple AssetVersions with the same version numbers. Asset uniqueness is determined by context_id, name and type_id, so you can effectively have an Asset per Task, as long as you have different types for each one. Now, certain tools (may) interpret Asset types as meaningful, so be prepared to run into issues if you choose to go down that road.

     We're making use of https://docs.sqlalchemy.org/en/14/orm/backref.html to populate the "assets" attribute. It's keyed off of the Asset's context(_id) attr, so in the current implementation, Tasks will never have that attribute populated, but any other Context may. The idiomatic ftrack way of accessing roughly the same thing would be task['parent']['assets'], or querying for AssetVersions which have the task attribute set to the Task in question (sketched at the end of this post).

     Back to workflow questions: why the need to organize and version in this way? If the tasks aren't sharing Assets, could they then be put under different parents (e.g. Folders)?
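     To make the "assets" discussion concrete, here's a minimal sketch of the two access patterns, assuming "task" is a Task entity you've already queried and "session" is its session:

         # Assets live on the Task's parent Context, never on the Task itself.
         assets_on_parent = task['parent']['assets']

         # AssetVersions published against this particular Task.
         versions_for_task = session.query(
             'AssetVersion where task_id is "{}"'.format(task['id'])
         ).all()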
  8. Hi Mark,

     I'm not sure you can do that with a projection, at least at the moment. When using an attribute like that as a condition, you need to cast the value to a specific entity type. We mention it in passing here. I'll have to check with someone who's more experienced with that part of the code base to see whether it's (intended to be) possible to do it the way you're thinking about.

     However, it appears you're trying to get information on a user's assigned Tasks (or TypedContexts, but probably just Tasks, right?), correct? You could run the query using a subquery (mentioned here):

         projections = ', '.join((
             '_link',
             'name',
             'parent.name',
             'status.name',
             'parent.parent.name',
         ))
         assigned_tasks = session.query(
             f'select {projections}'
             ' from TypedContext where id in'
             f' (select context_id from Appointment where resource.email is "{user_email}")'
         ).all()

     For the second question, regarding "_link": No, you cannot receive only part of the response back, but the first part of _link is Project information, so you could query for that specifically. You just need "project.name", since you'll always get the primary key(s), which is usually just "id". Alternately, if all you care about are names, you could change the projections to:

         projections = ', '.join((
             'ancestors.name',
             'status.name',
         ))
  9. Hi Andrea,

     If you queried for the Asset, you would find that it still exists. What you deleted on the web UI was only an AssetVersion, and we do not also clean up Assets once their last AssetVersion is deleted. So, one solution is to use session.ensure() instead of session.create(). For example:

         asset = session.ensure(
             'Asset',
             data={
                 'name': 'Asset',
                 'context_id': folder['id'],
                 'type_id': asset_type['id'],
             }
         )

     At the moment, ensure() creates a query naively, so you must not use entity relations (i.e. set "context_id" and "type_id" instead of "context" and "type").
     https://bitbucket.org/ftrack/ftrack-python-api/src/6c9c8a82f98a89cc7bae0bd2331729842b0a89e5/source/ftrack_api/session.py#lines-584
  10. Hi Vlad,

      The Python API has a convenience method on ProjectSchema objects:

          project_schema = session.query(
              'ProjectSchema where name is "{}"'.format("foo")
          ).one()

          # This is optional, and only needed if you have per-type status overrides in your schema.
          task_type = session.query(
              'Type where name is "{}"'.format("bar")
          ).one()

          task_statuses = project_schema.get_statuses(
              schema='Task',
              type_id=task_type['id'],
          )

      https://bitbucket.org/ftrack/ftrack-python-api/src/master/source/ftrack_api/entity/project_schema.py

      The function is implemented in the linked file, and you could get some inspiration from there if you wanted to reimplement it yourself instead.
  11. There's not an obvious (to me) way to override that, but if you used the legacy API to make your own TempData object, you can specify the expiry yourself. It sounds clunky, but you could duplicate a playlist created in the usual way, and make the copy long-lasting. I'll see whether STHLM has a better idea.
  12. As far as I know, the particular UI in your screenshot cannot be customized like that. To a large extent, you can get similar functionality in the Overview, using a cross-project view to show all the tasks to which you're assigned. That interface is built on our newer web framework and does allow showing certain attributes and complex filters. For customizing the status-column mapping on the Task board, see the entry in system settings detailed here: https://help.ftrack.com/en/articles/1040455-setting-up-task-boards
  13. Hi Ethan, I believe the underlying entry in the db lasts for ten minutes. There's a process which removes them after the expiry, but I don't know how often that runs.
  14. Hi Toby,

      It's not quite a tutorial, but we've got a couple examples of other structures:
      https://bitbucket.org/ftrack/ftrack-recipes/src/master/python/events/customise_structure/
      https://bitbucket.org/ftrack/ftrack-recipes/src/master/python/plugins/template-structure/

      There's not necessarily a ton to replace:
      https://bitbucket.org/ftrack/ftrack-python-api/src/master/source/ftrack_api/structure/standard.py
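      If it helps, here's a rough sketch of a custom Structure plugin, assuming the base class and method signature from the linked standard.py; the layout itself (publish/<version id>/<component name>) is just an invented example:

          import os

          import ftrack_api.structure.base


          class FlatStructure(ftrack_api.structure.base.Structure):
              '''Toy structure: place every component under a single "publish" folder.'''

              def get_resource_identifier(self, entity, context=None):
                  # Assumes the entity handed in is a Component (a simplification).
                  version = entity['version']
                  version_id = version['id'] if version else 'no-version'
                  return os.path.join(
                      'publish',
                      version_id,
                      entity['name'] + (entity['file_type'] or ''),
                  )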
  15. Hi Jason, welcome to the forum (from one former DWA-er to another). I hope to hear some solutions from folks in the trenches, but I can give an overview of what I know.

      Internally we use a couple things for testing. Automated tests with pytest use a combination of mocking (the API is all JSON blobs back and forth, so it's pretty easy to mock things like reading server information, object schemas, etc.) and a disposable ftrack installation in a container. I'm not involved in the build process of that one, so I can't really say how much info we have in the db when we spin up the container. The other, less-formal testing approach is with a heavier container we use for product demos as well -- it has a number of real-world datasets and associated media, so it takes a while to pull. We use Docker and Kubernetes internally, but at least one customer has adopted a similar approach to standing up a temporary server using Vagrant, I believe.

      For populating local test and hacking instances, I use a combination of Python for setting up Projects and populating some data, and straight SQL for some of the settings that are tedious/impossible to set otherwise.
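      For what it's worth, a minimal sketch of the mocking side with pytest and unittest.mock, so no server is needed (the function under test and the project names are made up):

          from unittest import mock


          def list_project_names(session):
              '''Toy function under test: return the names of all Projects.'''
              return [proj['name'] for proj in session.query('Project').all()]


          def test_list_project_names():
              # Stand in for ftrack_api.Session; query().all() returns canned dicts.
              session = mock.MagicMock()
              session.query.return_value.all.return_value = [
                  {'name': 'proj_a'},
                  {'name': 'proj_b'},
              ]

              assert list_project_names(session) == ['proj_a', 'proj_b']
              session.query.assert_called_once_with('Project')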
  16. Hey Peter,

      What you're running into is the fact that populate() constructs a query using that attribute string, and we don't support typecasting in a projection. See https://bitbucket.org/ftrack/ftrack-python-api/src/23f582cd146e71f871ba8a17da30c0ad831de836/source/ftrack_api/session.py#lines-1070

      We do support passing a list, tuple or QueryResult, so my workaround would be something like the populate line in this snippet. The rest is just included to set up my example/test.

          shot = session.query('select children from Shot where children is_not None').first()
          session.populate(shot['children'][:], 'status')

          with session.auto_populating(False):
              print(shot['children'][0]['name'])
              print(shot['children'][0]['status'])

      I suppose the root cause of all this is that children maps to Contexts, which can include Projects, which themselves do not have statuses.
  17. Correct, generally. For update events, you'd have to construct
  18. That's how mine looks in the browser too. That particular endpoint is only used by the legacy API for XML-RPC. Two angles of attack are either adding some debug output to what Deadline is running (and seeing whether it sets os.environ['LOGNAME'] in case your login username is different than your ftrack username) or avoiding Deadline for now. Both their plugins and our API which they vendor are editable as .py files, though do consider backing them up before altering them. Alternately, make sure ftrack Connect works (as the current version utilizes the legacy API as well as the newer one) and consider a small stand-alone script just to make sure you can connect with the legacy API and your API key.
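      For the debug-output angle, something as small as this dropped into whatever Deadline runs is usually enough to see which username and server the API would pick up (redact any API keys before sharing logs):

          import os

          # Print the environment variables the ftrack APIs typically read.
          for key in sorted(os.environ):
              if key.startswith('FTRACK') or key in ('LOGNAME', 'USER', 'USERNAME'):
                  print(key, '=', os.environ[key])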
  19. Hi Yating,

      The Deadline integration is created/owned by Thinkbox/AWS. They recently switched from the legacy API (which uses XML-RPC) to the newer Python API. I believe the switch happened in 10.1 and has been stable since 10.1.3.6. Is it possible for you to upgrade versions?

      In either case, I don't recall where the Deadline integration is getting an ftrack username. Even when using a "global" API key, an API session must be initialized with the username of an enabled User. I believe we fall back to LOGNAME if nothing is set explicitly in the constructor.

      For the 404 error, can you access that URL with curl from the command line? Or from your browser?
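      As a sanity check outside Deadline, here's a minimal sketch (values are made up) that passes the username explicitly so nothing falls back to LOGNAME:

          import ftrack_api

          session = ftrack_api.Session(
              server_url='https://yourcompany.ftrackapp.com',  # made up
              api_key='YOUR_GLOBAL_API_KEY',                   # made up
              api_user='enabled.username',                     # must be an enabled User
          )
          print(session.server_information)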
  20. Hi Jen, We use Thumbor to dynamically create the thumbnails, so I think you'd have to edit the URL to include a fill filter. Unfortunately we don't really support that -- we can point to a different Thumbor server than the default, but the part where we format the string is hardcoded to just have the "fit-in" directive. Maybe there's a way to force Thumbor to always apply a filter? That may very well not be an option (quick browse of the Thumbor docs didn't turn up anything). I could see our exposing a config var along the lines of "extra_thumbor_url_bits".
  21. You're self-hosting, right? There are a couple things you'd have to do to make the server handle it "natively". One is to create a setting, ftrack.image_conversion_formats, a comma-delimited list of supported formats. The other is to update or replace the image-encoder service to handle .ai files. We use the ImageMagick tool, convert, which should support .ai files with some added dependencies. I have not personally tried the above steps.
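      For reference, a rough sketch of the kind of conversion a replacement encoder would need to perform; the filenames are made up, and ImageMagick delegates .ai rasterization to Ghostscript, so that has to be installed:

          import subprocess

          # Rasterize an .ai file to PNG with ImageMagick's convert.
          subprocess.run(
              ['convert', '-density', '150', 'artwork.ai', '-flatten', 'artwork.png'],
              check=True,
          )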
  22. "show" and "task" are legacy identifiers, roughly corresponding to Project and TypedContext in the new API. session.get('Context', id) should return the most-specific class available for the given it. In my demo content, I have the following parent types: appointment asset assetversion asset_version dependency list note review_session review_session_object review_session_object_status show task One of the original designers might have to confirm, but I believe that we only persist to the db Events which trigger a notification/will be rendered in the UI. AFIAK, creating a user does not do this.
  23. Hi Jen, How are you uploading these files, and can you run the encoding client-side somehow? We have an example of using ftrack Connect to publish image sequences which are rendered as a movie by ffmpeg and then uploaded as the ftrackreview-mp4 component. If you're not using Connect, you could do something similar with an event listener watching for new publishes. https://bitbucket.org/ftrack/ftrack-recipes/src/master/python/events/encode_image_sequence/
  24. You can do a subquery (bottom of the page here: https://help.ftrack.com/en/articles/1040506-query-syntax), which would look like:

          'Event where created_at > (select created_at from Event where id is "{}")'

      As an aside, also take a look at the section on "projections". The two lines you pasted actually do three requests in the case where the "created_at" attribute wasn't yet cached. To avoid that automatic call, you can request all the attributes you'll need at once, like so:

          event = session.query('select created_at from Event where id is "{}"'.format(event_id)).first()
          events = session.query('Event where created_at > "{}"'.format(event['created_at']))
  25. Hi Johannes,

      Towards the bottom of this page (search for "subclass") we show some examples of casting a result to a single type: https://help.ftrack.com/en/articles/1040506-query-syntax

      In your case, I believe your query string would be:

          "AssetVersion where task.assignments.resource[User].memberships.group.name is {}"
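      Wrapped up in a snippet (the group name is made up, and I've quoted the value):

          group_name = 'compositors'  # made up
          versions = session.query(
              'AssetVersion where task.assignments.resource[User]'
              '.memberships.group.name is "{}"'.format(group_name)
          ).all()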