tokejepsen Posted November 29, 2016 Report Posted November 29, 2016 Hey, I'm wrapping my head around locations atm, so there might be concepts I don't understand. I'm trying to set up a location, just a standard structure to begin with. What I don't understand is the workflow for getting the disk prefix. Since Disks are considered "legacy", how are you supposed to get a disk prefix with locations only? My current plan for a workflow is to associate a Disk with a project, so I can get the disk prefix from there.
Mattias Lagergren Posted November 29, 2016 Report Posted November 29, 2016 As of now we have not fully decided where to go with the Disk entities in ftrack. They are no longer used out of the box for new ftrack installations, since our recommended location setup is the Centralised storage scenario, where the disks are not used and a single mount point is assumed. They are, however, accessible through the ftrack-python-api, and projects do still have a reference to a disk, so it could still be used with a custom location setup.
tokejepsen Posted November 29, 2016 Author Report Posted November 29, 2016 8 minutes ago, Mattias Lagergren said: since our recommended location setup is through the Centralised storage scenario where the disks are not used, and a single mount point is assumed. I was wondering how this actually works. When I set up a Centralised storage scenario, I get a location set up but no disks, so how does the location know about the mount point? I couldn't see anything when querying the location with the api.
Mattias Lagergren Posted November 29, 2016 Report Posted November 29, 2016 10 minutes ago, tokejepsen said: I was wondering how this actually works. When I set up a Centralised storage scenario, I get a location set up but no disks, so how does the location know about the mount point? I couldn't see anything when querying the location with the api. It stores the mount points in a JSON configuration dump for the storage scenario setting, so it works independently of the disks. This is causing some confusion and is something we need to solve and make more consistent.
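To illustrate the "JSON dump" idea, here is a minimal sketch of parsing such a configuration value. Note that the key names used here ("data", "accessor", "mount_points") and the scenario name are assumptions for illustration only, not the real schema ftrack uses:

```python
import json

# Hypothetical value for the storage scenario setting. The key names are
# made up for illustration and are not the real storage scenario schema.
raw_value = json.dumps({
    "scenario": "centralized-storage-scenario",
    "data": {
        "accessor": {
            "mount_points": {
                "windows": "P:\\projects",
                "linux": "/mnt/projects",
                "osx": "/Volumes/projects"
            }
        }
    }
})

# The mount point is read back from the JSON dump, independently of any
# Disk entity on the server.
config = json.loads(raw_value)
mount_points = config["data"]["accessor"]["mount_points"]
```

The point is simply that the mount points live in a serialized setting rather than in Disk entities, which is why querying the location or its disks reveals nothing.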
tokejepsen Posted November 29, 2016 Author Report Posted November 29, 2016 3 minutes ago, Mattias Lagergren said: It stores the mount points in a JSON configuration dump for the storage scenario setting, so it works independently of the disks. This is causing some confusion and is something we need to solve and make more consistent. Ahh, yes, I see it now. Shame you can't do this per location and associate some metadata with it. Currently it seems like locations are a categorisation layer that doesn't actually have any effect on the final destination of the data. For example, I can configure two locations that use a disk accessor to output to the same destination. And because you can alter the accessor of a location, you can't really be certain a location will have the same destination for the data. I know this is a framework, so it's up to the individual developers to be organised, but it would be nice to hear what the intended workflow is for using locations.
Mattias Lagergren Posted November 30, 2016 Report Posted November 30, 2016

23 hours ago, tokejepsen said: I know this is a framework, so it's up to the individual developers to be organised, but it would be nice to hear what the intended workflow is for using locations.

As you say, it is a framework and there are a lot of possibilities within that realm. I've tried to elaborate a little on different solutions depending on what use-case you are trying to solve.

Basic use-case

For the absolutely basic use-case - a studio working from one place, with a single NAS that they want to use - the recommended way is a storage scenario with one mount point.

Custom structure

There are currently no options to change the structure / path of what is published when using the centralised storage scenario. To overcome this, one can set up a custom location plugin with a custom structure.

Multi-site

If the studio works across multiple sites, they may not have the same NAS mounted at all facilities. In this scenario each facility may have its own custom location plugin mapped to an "ftrack location". Another area to explore here is cross-site syncing.

Multi-disk

In multi-disk scenarios you currently have to resort to building your own custom location plugins. A best practice - though not absolutely necessary - is to have one "ftrack location" representing one storage. This may become difficult if you want to publish to different storages depending on project. Two possibilities:

InternalResourceTransformer: The Connect location (now deprecated but still working) solves this by utilising an InternalResourceTransformer (you can see this in the legacy api in the __init__ file) and a disk prefix set to ''. When reading and writing the resource identifier to the server, the disk path is applied according to the setting on the Project.

ProxyLocation: This is not something I've done myself, but it is possible to set up a location plugin that does not map to an "ftrack location" - but rather proxies to the correct location plugin depending on the component being added. I think Lorenzo at efestolab has a solution for this.
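The proxy idea above can be sketched in a framework-agnostic way: the proxy holds no accessor or structure itself, it just selects a real backing location per component and forwards the call. The method names and the component dict here are illustrative only, not the ftrack API:

```python
# Sketch of a proxy location. `add_component` and the component dict are
# illustrative stand-ins, not real ftrack API calls.

class ProxyLocation(object):

    def __init__(self, locations_by_project, default_location):
        # Map project key -> backing location, with a fallback.
        self.locations_by_project = locations_by_project
        self.default_location = default_location

    def _resolve(self, component):
        # However the project is derived in practice, the proxy only needs
        # some key to select the correct backing location.
        project = component.get("project")
        return self.locations_by_project.get(project, self.default_location)

    def add_component(self, component):
        # Forward to whichever real location this component belongs to.
        return self._resolve(component).add_component(component)
```

The attraction of this pattern is that publishing code only ever talks to one location object, while each project (or disk) can still end up on its own storage.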
tokejepsen Posted November 30, 2016 Author Report Posted November 30, 2016

2 hours ago, Mattias Lagergren said: InternalResourceTransformer: The Connect location (now deprecated but still working) solves this by utilising an InternalResourceTransformer (you can see this in the legacy api in the __init__ file) and a disk prefix set to ''. When reading and writing the resource identifier to the server, the disk path is applied according to the setting on the Project.

I tried mirroring this outside of ftrack-connect with just the ftrack_api, and the resource identifiers are set correctly, but the data still doesn't go to the right place. I might be misunderstanding the usage of the resource identifier transformer, but here is the code:

import os
import platform

import ftrack_api
import ftrack_api.accessor.disk
import ftrack_api.structure.standard


class ResourceIdentifierTransformer(object):

    def __init__(self, session):
        self.session = session
        super(ResourceIdentifierTransformer, self).__init__()

    def encode(self, resource_identifier, context=None):
        return self.transform_resource_identifier(resource_identifier, context)

    def decode(self, resource_identifier, context=None):
        return self.transform_resource_identifier(resource_identifier, context)

    def transform_resource_identifier(self, resource_identifier, context):
        parents = []
        for item in context["component"]["version"]["link"][:-1]:
            parents.append(self.session.get(item["type"], item["id"]))

        project = parents[0]

        system = platform.system().lower()
        if system != "windows":
            system = "unix"

        return os.path.join(
            project["disk"][system],
            project["root"],
            resource_identifier
        ).replace("\\", "/")


# Assign standard structure, disk accessor and resource identifier
# transformer to the location.
location.structure = ftrack_api.structure.standard.StandardStructure()
location.accessor = ftrack_api.accessor.disk.DiskAccessor(prefix="")
location.resource_identifier_transformer = ResourceIdentifierTransformer(session)

BTW: Why can't we use Python syntax highlighting on the forums?
Mattias Lagergren Posted December 1, 2016 Report Posted December 1, 2016 19 hours ago, tokejepsen said: I tried mirroring this outside of ftrack-connect with just the ftrack_api, and the resource identifiers are set correctly, but the data still doesn't go to the right place. I might be misunderstanding the usage of the resource identifier transformer, but here is the code; Just to clarify decode and encode: "decode" is called when the resource_identifier has been retrieved from the server, i.e. location.get_resource_identifier(component). "encode" is called before persisting the resource_identifier to the server. In your case you would only want to use "decode". However, when browsing the ftrack-python-api code I can see that necessary data, such as the context, is not passed to the resource identifier transformer's decode function. I will have a look at this to see if the fix is as straightforward as I hope. Using the proxy location approach may be the better solution here at the moment.
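The "only use decode" advice can be shown with a minimal transformer sketch: encode persists the identifier untouched (relative to the mount point), and the prefix is applied only when reading back. The fixed prefix here is a stand-in for a real per-project disk lookup:

```python
# Sketch: a transformer that only acts on decode. The prefix is a stand-in
# for resolving the project's disk; it is not a real ftrack lookup.

class DecodeOnlyTransformer(object):

    def __init__(self, prefix):
        self.prefix = prefix

    def encode(self, resource_identifier, context=None):
        # Called before persisting to the server: store as-is, so the
        # server keeps a mount-point-relative path.
        return resource_identifier

    def decode(self, resource_identifier, context=None):
        # Called after retrieving from the server: prepend the
        # platform-specific mount point.
        return "/".join([self.prefix.rstrip("/"), resource_identifier])
```

Because encode is a no-op, nothing absolute is ever written to the server, and different machines can decode the same stored identifier against different prefixes.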
tokejepsen Posted December 1, 2016 Author Report Posted December 1, 2016 11 minutes ago, Mattias Lagergren said: However, when browsing the code ftrack-python-api I can see that necessary data such as context is not passed to the resource identifier transformer decode function I was able to print the context from the decode function, and get the component so I didn't get any errors. The data just didn't go to where the resource identifier says it would.
Mattias Lagergren Posted December 1, 2016 Report Posted December 1, 2016

4 minutes ago, tokejepsen said: I was able to print the context from the decode function, and get the component, so I didn't get any errors. The data just didn't go to where the resource identifier says it would.

And you're sure that it is not when called by "encode"? The "decode" function does not receive the context: https://bitbucket.org/ftrack/ftrack-python-api/src/28a17e9f54b516369ea9f1a999d8fcda41a705b8/source/ftrack_api/entity/location.py?at=master&fileviewer=file-view-default#location.py-515

Anyway, some suggestions on changes:

* Add the disk prefix in the structure plugin rather than in "encode". This should hopefully put the data in the correct place and will save an absolute path on the server.
* Let "decode" replace the disk part of the path with the correct one according to your platform. This is done naively in the internal resource identifier transformer in the legacy api:

def _stripDiskFromResourceIdentifier(self, resourceIdentifier, diskId):
    '''Return *resourceIdentifier* with disk (*diskId*) prefix removed.

    If *resourceIdentifier* starts with either the Windows or Unix path
    of the disk with *diskId*, then remove that portion from the
    returned result.

    .. note ::

        Returned resource identifier is a relative path if
        *resourceIdentifier* is relative or it matches the Windows or
        Unix prefixes of the disk with *diskId*.

    '''
    disk = self._disks[diskId]
    diskPrefixes = [disk.get('windows'), disk.get('unix')]

    # Sort the disk prefixes by length so that the longest matching is
    # always stripped.
    diskPrefixes = sorted(diskPrefixes, key=len, reverse=True)

    for diskPrefix in diskPrefixes:
        if resourceIdentifier.startswith(diskPrefix):
            # Matching disk found so remove disk prefix from identifier.
            resourceIdentifier = resourceIdentifier[len(diskPrefix):]

            if ntpath.isabs(resourceIdentifier):
                # Ensure that resulting path is relative by stripping any
                # leftover prefixed slashes from string.
                # E.g. If disk was '/tmp' and path was '/tmp/foo/bar' the
                # result will be 'foo/bar'.
                resourceIdentifier = resourceIdentifier.lstrip('\\/')

            # Longest matching disk prefix has been stripped. No need to
            # continue matching disks.
            break

    return resourceIdentifier

Btw, I just want to make sure that you know that the current version of Connect and our integrations (except for the Adobe plugins) are using the legacy api for publishing/import. This is something that we're about to change: using the new api for publishing, plus providing a compatibility layer so that you can still use locations defined with the legacy api. Just so you know - so you don't spend a lot of time writing locations in the ftrack-python-api that you plan to use with our current versions of the Connect publisher + integrations.
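The second suggestion - strip whichever disk prefix the stored identifier starts with and re-apply the one matching the current platform - can be condensed into a standalone function. The prefix values and the function name are illustrative, not part of any ftrack API:

```python
# Sketch: swap a stored identifier's disk prefix (Windows or Unix) for the
# prefix matching the current platform. Prefix values are made up.

def swap_disk_prefix(resource_identifier, windows_prefix, unix_prefix,
                     current_prefix):
    # Check the longest prefix first so that e.g. '/mnt/projects2' is not
    # partially matched by '/mnt'.
    for prefix in sorted([windows_prefix, unix_prefix], key=len, reverse=True):
        if resource_identifier.startswith(prefix):
            # Drop the matched prefix plus any leftover leading slashes,
            # and normalise separators in the remainder.
            remainder = resource_identifier[len(prefix):].lstrip("\\/")
            remainder = remainder.replace("\\", "/")
            return "/".join([current_prefix.rstrip("\\/"), remainder])

    # No known disk prefix: return the identifier unchanged.
    return resource_identifier
```

A decode implementation along these lines would look up the two prefixes from the project's Disk and pass the platform-appropriate one as `current_prefix`.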
tokejepsen Posted December 1, 2016 Author Report Posted December 1, 2016

1 hour ago, Mattias Lagergren said: And you're sure that it is not when called by "encode"?

You are absolutely right! When running the code I only see a printout from the encode function.

1 hour ago, Mattias Lagergren said: * Add the disk prefix in the structure plugin rather than in "encode". This should hopefully put the data in the correct place and will save an absolute path on the server.

Wouldn't this hardcode the disk prefix when you launch ftrack-connect? So if I switch to a different project within the same ftrack-connect session, it'll remember the prefix from when I started ftrack-connect.

1 hour ago, Mattias Lagergren said: * Let "decode" replace the disk part of the path with the correct one according to your platform. This is done naively in the internal resource identifier transformer in the legacy api:

Yup, I need to improve the logic. Currently I'm just trying to get the data into the correct place.

1 hour ago, Mattias Lagergren said: Just so you know - so you don't spend a lot of time writing locations in the ftrack-python-api that you plan to use with our current versions of the Connect publisher + integrations.

Ohh, I see. I think I did read something about the location incompatibility between the apis. I'll have a look at using the old api. Would it be the same workflow, using a resource identifier transformer?
Mattias Lagergren Posted December 2, 2016 Report Posted December 2, 2016 21 hours ago, tokejepsen said: Wouldn't this hardcode the disk prefix when you launch ftrack-connect? So if I switch to a different project with the same ftrack-connect session, it'll remember the prefix from when I started ftrack-connect. You can add it dynamically and determine it from the component that gets passed to the structure plugin. 21 hours ago, tokejepsen said: Should be the same workflow of using a resource identifier transformer? Yes, the resource identifier transformer is never used by any of our built-in plugins in the new api. The legacy api uses it for Connect location and Unmanaged location - so the support should be good.
Mattias Lagergren Posted December 2, 2016 Report Posted December 2, 2016 The proxy location solution is discussed here: It may lead to a clearer and more dynamic solution, as you can have one "ftrack location" per disk.
tokejepsen Posted December 6, 2016 Author Report Posted December 6, 2016

Hey, I've got some code where I'm using the old API and a resource identifier transformer. I just wanted to see if I could get the data to the right place:

import os
import platform

import ftrack


class ResourceIdentifierTransformer(object):

    def __init__(self):
        super(ResourceIdentifierTransformer, self).__init__()

    def encode(self, resource_identifier, context=None):
        print "encode"
        return self.transform_resource_identifier(resource_identifier, context)

    def decode(self, resource_identifier, context=None):
        print "decode"
        return self.transform_resource_identifier(resource_identifier, context)

    def transform_resource_identifier(self, resource_identifier, context):
        project = context["component"].getParents()[-1]

        system_name = platform.system().lower()
        if system_name != "windows":
            system_name = "unix"

        mount = ftrack.Disk(project.get("diskid")).get(system_name)

        return os.path.join(
            mount, project.get("root"), resource_identifier
        ).replace("\\", "/")

I get "encode" and "decode" printed, but the data doesn't go to the expected path of the project's disk and root. The resource identifier is correctly set in the web UI.
Mattias Lagergren Posted December 7, 2016 Report Posted December 7, 2016

16 hours ago, tokejepsen said: I get "encode" and "decode" printed, but the data doesn't go to the expected path of the project's disk and root. The resource identifier is correctly set in the web UI.

Encode/decode are called to encode the resource identifier before it is saved on the server and to decode it when it is retrieved from the server. By the time "encode" is called, the data has already been written to disk; you are only asked to encode the identifier before it is persisted to the server. You could instead add the disk in the structure plugin and strip it off before persisting to the server:

1. The structure plugin generates path + disk.
2. The resource transformer's encode strips the disk so that the resource identifier is persisted relative to the mount point.
3. The resource transformer's decode appends the correct disk, depending on the platform, when the resource identifier is retrieved from the server.
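The three steps above can be shown as a minimal round trip, with a constant prefix standing in for the per-project disk lookup (all names here are illustrative):

```python
# Minimal round trip of: structure generates path + disk, encode strips the
# disk for storage, decode re-applies it on retrieval.

DISK_PREFIX = "/mnt/projects"  # stand-in for the project's platform disk


def structure_path(relative_path):
    # Step 1: structure plugin generates path including the disk.
    return "/".join([DISK_PREFIX, relative_path])


def encode(resource_identifier, context=None):
    # Step 2: strip the disk so the server stores a relative identifier.
    return resource_identifier[len(DISK_PREFIX):].lstrip("/")


def decode(resource_identifier, context=None):
    # Step 3: re-apply the platform-correct disk when reading back.
    return "/".join([DISK_PREFIX, resource_identifier])
```

The invariant to aim for is that `decode(encode(p)) == p` on the publishing platform, while the stored value stays portable across platforms.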
tokejepsen Posted December 7, 2016 Author Report Posted December 7, 2016

Alright, thanks @Mattias Lagergren for your patience with this. I've got a location up and running now; it possibly needs some more work, but it works for now:

import os
import platform

import ftrack


class ProjectDiskStructure(ftrack.StandardStructure):

    def getResourceIdentifier(self, entity):
        project = entity.getParents()[-1]

        system_name = platform.system().lower()
        if system_name != "windows":
            system_name = "unix"

        mount = ftrack.Disk(project.get("diskid")).get(system_name)
        parts = [mount, project.get("root")]

        if not entity.isContainer():
            container = entity.getContainer()

            if container:
                # Get resource identifier for container.
                containerPath = self.getResourceIdentifier(container)

                if container.isSequence():
                    # Strip the sequence component expression from the parent
                    # container and add back the correct filename, i.e.
                    # /sequence/component/sequence_component_name.0012.exr.
                    name = '{0}.{1}{2}'.format(
                        container.getName(), entity.getName(),
                        entity.getFileType()
                    )
                    parts += [
                        os.path.dirname(containerPath),
                        self.sanitiseForFilesystem(name)
                    ]
                else:
                    # Container is not a sequence component so add it as a
                    # normal component inside the container.
                    name = entity.getName() + entity.getFileType()
                    parts += [
                        containerPath, self.sanitiseForFilesystem(name)
                    ]
            else:
                # File component does not have a container, construct name
                # from component name and file type.
                parts += self._getParts(entity)
                name = entity.getName() + entity.getFileType()
                parts.append(self.sanitiseForFilesystem(name))

        elif entity.isSequence():
            # Create sequence expression for the sequence component and add
            # it to the parts.
            parts += self._getParts(entity)
            sequenceExpression = self._getSequenceExpression(entity)
            parts.append(
                '{0}.{1}{2}'.format(
                    self.sanitiseForFilesystem(entity.getName()),
                    sequenceExpression,
                    self.sanitiseForFilesystem(entity.getFileType())
                )
            )

        elif entity.isContainer():
            # Add the name of the container to the resource identifier parts.
            parts += self._getParts(entity)
            parts.append(self.sanitiseForFilesystem(entity.getName()))

        else:
            raise NotImplementedError(
                'Cannot generate resource identifier for unsupported '
                'entity {0!r}'.format(entity)
            )

        path = self.pathSeparator.join(parts)
        return path.replace("//", "/")


class ResourceIdentifierTransformer(object):

    def __init__(self):
        super(ResourceIdentifierTransformer, self).__init__()

    def encode(self, resource_identifier, context=None):
        return resource_identifier.replace(
            self.get_project_directory(context), ""
        )

    def decode(self, resource_identifier, context=None):
        return os.path.join(
            self.get_project_directory(context), resource_identifier
        ).replace("\\", "/")

    def get_project_directory(self, context):
        project = context["component"].getParents()[-1]

        system_name = platform.system().lower()
        if system_name != "windows":
            system_name = "unix"

        return os.path.join(
            ftrack.Disk(project.get("diskid")).get(system_name),
            project.get("root"), ""
        ).replace("\\", "/")


def register(registry, **kw):
    location = ftrack.ensureLocation("project.disk.root")
    location.setAccessor(ftrack.DiskAccessor(prefix=""))
    location.setStructure(ProjectDiskStructure())
    location.setResourceIdentifierTransformer(ResourceIdentifierTransformer())
    registry.add(location)
Mattias Lagergren Posted December 8, 2016 Report Posted December 8, 2016 My pleasure! And it seems to be working?
tokejepsen Posted December 9, 2016 Author Report Posted December 9, 2016 I've revised the plugin after testing with sequence containers. Here is the working version; https://github.com/Bumpybox/bumpybox-core/tree/master/environment/FTRACK_LOCATION_PLUGIN_PATH