All Activity


  1. Today
  2. Yesterday
  3. Lorenzo Angeli

    0.1.0

    Windows Download 0.1.0 / Linux Download 0.1.0 / OSX Download 0.1.0 / Documentation

    Changes since last version
      fixed: User settings crash under the OSX and Windows platforms.
      fixed: Perforce password is not properly set.
      fixed: Workspaces break if they contain spaces.
      new: The admin role for actions is now also checked against Perforce roles.
      new: The user's workspace is created on first run if not already available.
      new: Initial documentation.

    Requirement
      Accessible Helix server.

    What to expect to work
      Storage setup, user preferences setup, publish, versioning.

    Aim for testing
      Publish from any application; import/update versions through Connect's asset management system.

    Known issues and limitations
      None known at the moment.
  4. Last week
  5. Earlier
  6. Hi Francois, This sounds strange, and I can't seem to reproduce what you mention. Or maybe I have misunderstood. Could you provide us with some screenshots of what it looks like when you set it up? Regards, Johan
  7. Hi Konstantin, Thanks for the valuable feedback. I have added your comment to the feature request. Regards, Johan
  8. Hi, I think I may have stumbled on an unwanted behaviour around version statuses: when creating a new version, its status is the first of the statuses selected in System Settings → Schemas → My_Schema → Version. If you reorder the statuses (System Settings → Statuses), move another status used in your schema on top of the first one used, and then save, the status of new versions will change accordingly. This is not true for tasks, which are always created with the Not ready status (or so I believe). As a result, this is not intuitive at all for versions. It would be handy to be able to explicitly choose which status is the default, for both tasks and versions. (A possible API-side workaround is sketched below.)
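     A minimal sketch, assuming the ftrack Python API, of setting a version's status explicitly after creation instead of relying on the schema's status ordering. The project name, version id and status name below are placeholders, not values from this thread:

        import ftrack_api

        session = ftrack_api.Session()  # server URL, user and key read from the environment

        # Placeholder identifiers; replace with real ones.
        version = session.get('AssetVersion', '<version id>')
        project = session.query('Project where name is "my_project"').one()

        # Statuses valid for versions under the project's workflow schema.
        statuses = project['project_schema'].get_statuses('AssetVersion')
        wanted = next(s for s in statuses if s['name'] == 'Pending Review')

        version['status'] = wanted
        session.commit()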
  9. I do not see a mention of a task's tracked time breakdown here. When I see a negative "+/- hours" on a particular task, I want to see who is responsible for that and when it happened. The assignee might have been changed and the task finished on time, but polishing took too long. A popup window on the task's "Worked hours" value would be nice.
  10. I'd like to report here, for other users who might run into similar issues, the root cause, which has been resolved through the internal ticketing system. Here is the full reply from our developer: we are now looking at how to properly fix this for an upcoming API release.
  11. Hi @tweak-wtf, sadly there's no easy way for us to replicate this issue. We'll be looking, though, at improving the error logging in the API to present a more complete report. Let us know if you make any progress on this. Cheers. L.
  12. Thank you for your answer and for reporting it to the development team. I hope they'll be able to fix it soon; this is a feature we would really appreciate being able to use!
  13. Hi Francois, Sorry for the late update. Yes, the way this works today is that you will see time for the account, independent of the project. This is not really the expected behaviour, so I have reported it to development. Regards, Johan
  14. Hey @Lorenzo Angeli, the output of sys.executable is: /prod/softprod/apps/maya/2018/linux/bin/maya.bin. I've also printed the Python version in use, if that's of any help: 2.7.11 (default, Dec 21 2015, 14:39:44) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)]. I have already briefly touched base with the pyblish developers on Gitter. They pointed out to me that the actual key gets quoted: if you look at my first stacktrace in this thread, you see the KeyError is on '"storage_scenario"', but I guess it should rather be 'storage_scenario'. That KeyError is actually raised when I try to print the ServerError, which is handled in the __str__ method of exception.py. I think that is also the reason why I get ServerError: <unprintable ServerError object> in my last stacktrace. I don't know for sure, but I think if I managed to print the actual ServerError, that would already be really helpful. (The quoted-key failure is illustrated in the sketch below.)
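     A minimal sketch of the failure mode described above (not the actual ftrack_api code). The stacktrace later in this thread shows ServerError.__str__ calling str.format on the message; when the message is the raw JSON body, the braces are parsed as replacement fields and the quoted key is looked up, so the KeyError carries the quotes:

        raw_error = '[{"storage_scenario": "ftrack.automatic"}]'
        try:
            # Mimics ftrack_api.exception.ServerError.__str__, which does
            # roughly message.format(**keys) on the raw server response.
            raw_error.format()
        except KeyError as error:
            print(error)  # '"storage_scenario"'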
  15. Hi @instinct-vfx, glad to hear! Please feel free to report any issues or notes about your experience with them. Cheers. L.
  16. Hi @tweak-wtf, thanks for reporting back, and good idea to raise the overall logging level! Sadly though, I cannot see anything in the errors that would help pinpoint the root cause. As the error seems to be happening from within the plugin only, and not from a plain shell or plain Maya, I'd suggest getting the pyblish developers involved. If you get any further information from them, please feel free to report it here so we can cross-examine the results. As a last thing, could you please check which interpreter pyblish is using by adding the following at the beginning of your script:

        import sys
        print sys.executable

     In the meanwhile we'll keep looking into potential causes on our side. L.
  17. Update: I just wrapped my connection call in a try/except block and printed the stacktrace... maybe that also helps?!

     Call to connect to the API:

        session = None
        try:
            session = ftrack_api.Session(
                server_url=FTRACK_SERVER,
                api_key=FTRACK_API_KEY,
                api_user=getpass.getuser(),
            )
        except Exception as e:
            self.log.debug("Session: {}".format(session))
            self.log.debug(traceback.format_exc(e))

     Stacktrace:

        // pyblish.CollectFTrackInfo : Traceback (most recent call last):
          File "<string>", line 35, in process
          File "/prod/softprod/libs/ftrack_api/session.py", line 224, in __init__
            self._server_information = self._fetch_server_information()
          File "/prod/softprod/libs/ftrack_api/session.py", line 1317, in _fetch_server_information
            result = self._call([{'action': 'query_server_information'}])
          File "/prod/softprod/libs/ftrack_api/session.py", line 1619, in _call
            raise ftrack_api.exception.ServerError(error_message)
        ServerError: <unprintable ServerError object> //
  18. Hi @Lorenzo Angeli, thanks for responding. Sure... I also sent an email with some insights to support, so I'll just summarize what I wrote there. Currently I'm trying to build a 3D pipeline based on pyblish that also integrates the published files with ftrack. For that I need to connect to the ftrack API from a pyblish plugin. The error started popping up 2 days ago; before that everything was working as normal. I talked to our sysadmin and he told me that he didn't change anything about the network settings. The error only happens when trying to establish a connection from that pyblish plugin; connecting from a standard Python interpreter run from the terminal works. I went ahead and added some lines to session.py in the ftrack_api Python package that log some variables to a file, in order to see what's going on. The output I'm getting is the following:

     Trying to connect from terminal...
        ############# DATA ##########
        [{"action": "query_server_information"}]
        ############# RESULT ##########
        [{u'storage_scenario': {u'data': {}, u'scenario': u'ftrack.automatic'}, u'version': u'4.1.5.4750', u'is_timezone_support_enabled': False, u'schema_hash': u'450f452f8addcd23370a8a90f8156c4a'}]

     Trying to connect from pyblish plugin...
        ############# DATA ##########
        [{"action": "query_server_information"}]
        ############# RESULT ##########
        [{u'storage_scenario': {u'data': {}, u'scenario': u'ftrack.automatic'}, u'version': u'4.1.5.4750', u'is_timezone_support_enabled': False, u'schema_hash': u'450f452f8addcd23370a8a90f8156c4a'}]

     Trying to connect from Maya Script Editor...
        ############# DATA ##########
        [{"action": "query_server_information"}]
        ############# RESULT ##########
        [{u'storage_scenario': {u'data': {}, u'scenario': u'ftrack.automatic'}, u'version': u'4.1.5.4750', u'is_timezone_support_enabled': False, u'schema_hash': u'450f452f8addcd23370a8a90f8156c4a'}]

     Note that DATA logs the data being POSTed to the API server and RESULT logs the result returned from that POST. From that log I can't see any difference between the three contexts from which I try to connect to the API. I think the problem has something to do with execution from within the pyblish plugin, but I don't understand why the error appeared basically out of nowhere. I'm afraid this may also be quite hard for you to reproduce?! Thanks for your answer!
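     A minimal sketch, assuming credentials are set in the environment, of surfacing the same request/response detail without patching session.py: ftrack_api emits its "Calling server ..." and "Response: ..." messages through the standard logging module at DEBUG level, as seen in the log excerpt elsewhere in this thread.

        import logging
        import ftrack_api

        # Enable DEBUG logging before the session is created so ftrack_api's
        # own request/response messages are printed.
        logging.basicConfig(level=logging.DEBUG)

        session = ftrack_api.Session()  # server URL, user and key from the environment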
  19. @tweak-wtf, could you please try with a simple standalone Python script rather than from within Maya, to see if you can replicate it? Thanks!
  20. Hi @tweak-wtf, sorry to see you are having problems. The error seems to be coming from a wrong deserialization of the server response (JSON) relative to the storage scenario. This could be caused by a variety of things related to the network itself (proxy, firewall, etc.). Could you please check whether any such changes were put in place before this error started happening? I've been trying to replicate it with the same version of the ftrack-python-api, without success so far. Looking forward to hearing more from you! In the meanwhile we'll keep looking for other potential causes. L.
  21. Hi, Thanks for the update. Regarding tasks: Task is the leaf object, so no object can be created on tasks. Regards, Johan
  22. Hey folks, since yesterday I get the following error when trying to connect to the ftrack_api:

        // ftrack_api.session.Session : Calling server https://[OUR_FTRACK_SITE]/api with '[{"action": "query_server_information"}]' //
        // ftrack_api.session.Session : Call took: 0.113607 //
        // ftrack_api.session.Session : Response: u'[{"storage_scenario": {"data": {}, "scenario": "ftrack.automatic"}, "schema_hash": "450f452f8addcd23370a8a90f8156c4a", "version": "4.1.5.4750", "is_timezone_support_enabled": false}]' //
        // Error: ftrack_api.session.Session : Server reported error in unexpected format. Raw error was: [{"storage_scenario": {"data": {}, "scenario": "ftrack.automatic"}, "schema_hash": "450f452f8addcd23370a8a90f8156c4a", "version": "4.1.5.4750", "is_timezone_support_enabled": false}] //
        # Traceback (most recent call last):
        #   File "/prod/softprod/apps/maya/2018/linux/lib/python27.zip/logging/__init__.py", line 853, in emit
        #     msg = self.format(record)
        #   File "/prod/softprod/apps/maya/2018/linux/lib/python27.zip/logging/__init__.py", line 726, in format
        #     return fmt.format(record)
        #   File "/users_roaming/tdorfmeister/devel/ppStudio/common/libs/filmmore/logger.py", line 48, in format
        #     logline = super(self.__class__, self).format(record)
        #   File "/prod/softprod/apps/maya/2018/linux/lib/python27.zip/logging/__init__.py", line 465, in format
        #     record.message = record.getMessage()
        #   File "/prod/softprod/apps/maya/2018/linux/lib/python27.zip/logging/__init__.py", line 325, in getMessage
        #     msg = str(self.msg)
        #   File "/prod/softprod/libs/ftrack_api/exception.py", line 42, in __str__
        #     return str(self.message.format(**keys))
        # KeyError: '"storage_scenario"'

     I'm just trying to connect using `ftrack_api.Session()` with my credentials and `auto_connect_event_hub=True` (see the sketch below). Any help is much appreciated, since I really have no clue how to figure this one out on my own. Thanks

     PS: I should note that my api is on v1.3.3. I know... 😕
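     A minimal sketch of the connection call described above; the server URL, user and key are placeholders, not values from this thread.

        import ftrack_api

        # Placeholder credentials; the real site URL, user and API key differ.
        session = ftrack_api.Session(
            server_url='https://<our-ftrack-site>',
            api_user='<username>',
            api_key='<api key>',
            auto_connect_event_hub=True,
        )
        print(session.server_information)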
  23. Never mind... figured it out. I wasn't aware that ftrack_api stores some query results in a local cache.
  24. Hey folks, I just started working with ftrack, so please don't judge me for my probably noobish question. I'm trying to set a thumbnail and an ftrackreview-mp4 on a task. I already managed to upload an .mp4 and a thumbnail to my asset versions, but now I want to apply these files to the linked task as well. I was trying to create a new component on a task, but I receive an error telling me that I can't create components on a Task. I have also read about setting the asset types in the Workflow section of the ftrack settings, but I'm kind of lost in that overview since I can't really understand what this Asset Type section actually does. Oh, and I have to tell you that unfortunately I'm on ftrack_api v1.3.3 and probably not allowed to bump up the version 😕 Any help is much appreciated! (See the sketch below for one possible approach to the thumbnail.)
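     A minimal sketch, assuming the thumbnail has already been published as a component on the AssetVersion (the id below is a placeholder): rather than creating a component on the task, the task's thumbnail_id can be pointed at the same component the version uses. As far as I know, the ftrackreview-mp4 component is meant to live on the version for web playback rather than on the task.

        import ftrack_api

        session = ftrack_api.Session()  # credentials read from the environment

        # Placeholder id; in practice this would come from the publish step.
        version = session.get('AssetVersion', '<version id>')
        task = version['task']

        # Reuse the version's thumbnail component on the linked task.
        task['thumbnail_id'] = version['thumbnail_id']
        session.commit()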
  25. Hi John, You can find information about locations and structure in the following documents:
     https://help.ftrack.com/developing-with-ftrack/key-concepts/locations
     https://bitbucket.org/ftrack/ftrack-recipes/src/master/python/events/customise_structure/
     http://ftrack-python-api.rtd.ftrack.com/en/stable/locations/overview.html?highlight=structure
     Hope this helps. Regards, Johan
  26. Yes, Daniel is assigned to tasks on "PROJECT". He indeed didn't work on "PROJECT" in April. So I may have misunderstood the use of the User breakdown report. Isn't the intention to show the amount of time worked by every user on the selected project only? Am I missing something here? There is a filter that allows selecting only users that have logged time, but this seems to take all projects into account and not be limited to the selected project.
  27. Hi John, You can also add your vote on our roadmap: https://trello.com/c/hz4agtYK/65-new-custom-attributes-link-entities-type Regards, Johan
  28. Hi Francois, The User breakdown report will show time for accounts that are assigned to tasks on a project. So in your example I assume that Daniel is assigned to tasks on "PROJECT", even though no time has been reported on that project in April? Regards, Johan