Steve Petterborg

Administrators
  • Content Count: 20
  • Joined
  • Last visited
  • Days Won: 2

Steve Petterborg last won the day on October 22 2019

Steve Petterborg had the most liked content!

About Steve Petterborg
  • Rank: Member


  1. Hi John, You're using the Gantt scheduler on a Project's Tasks view, correct? I'm not aware of a modifier key to disable the behavior you noticed, which is based on the assumption that work is only being done on a work day. The list of workdays is under System Settings > Scheduling > Settings > Workweek. I understand you might not want to make the change there, so if it's a rare occurrence that you need to schedule a Task for the weekend, you can always edit the start and end dates manually by clicking on the Task name and editing the required attribute(s) in the info panel.
  2. ```python
import atexit
import time

import ftrack_api

%%memit
for _ in xrange(100):
    session = ftrack_api.Session(auto_connect_event_hub=False)
    # Find and remove the atexit handler the session registered, then
    # close the session so nothing keeps a reference to it alive.
    for index, entry in enumerate(atexit._exithandlers):
        if entry[0] == session.close:
            break
    session.close()
    del atexit._exithandlers[index]
```
Personally I do a lot of testing/hacking in Jupyter, so I'm using a "magic" annotation here. It also has some side effects, so I have my own branch of memory-profiler. Lorenzo likes running memory-profiler on the command line so he can get a nice graph: https://pypi.org/project/memory-profiler/ This was my first attempt at comparing the counts of object types for your mem leak, followed by my second approach to the handle function. In neither case did I create new sessions in the handler.
```python
import collections
import gc
from functools import partial

import ftrack_api


def dd_compare(one, two):
    # Report types that appeared, disappeared or changed count between snapshots.
    one_only = set(one.keys()).difference(two.keys())
    if one_only:
        print 'Types we lost:\n  {}'.format(
            '\n  '.join(str(type_) for type_ in one_only)
        )
    two_only = set(two.keys()).difference(one.keys())
    if two_only:
        print 'Types we gained:\n  {}'.format(
            '\n  '.join(str(type_) for type_ in two_only)
        )
    for key in set(one.keys()).intersection(two.keys()):
        if one[key] == two[key]:
            continue
        print '{}: {} -> {}'.format(key, one[key], two[key])


def handle_event(session, event):
    global after
    before = after
    after = collections.defaultdict(int)
    for i in gc.get_objects():
        after[type(i)] += 1
    dd_compare(before, after)


before = collections.defaultdict(int)
after = collections.defaultdict(int)

session = ftrack_api.Session(auto_connect_event_hub=True)
print session.server_url
handler = partial(handle_event, session)
session.event_hub.subscribe('topic=*', handler)

# Baseline snapshot of live object counts per type.
for i in gc.get_objects():
    before[type(i)] += 1
session.event_hub.wait()
```
And the second version of the handler, which tracks the peak count seen for each type:
```python
# Peak object counts seen so far, per type.
peaks = collections.defaultdict(int)


def handle_event(session, event):
    global peaks
    after = collections.defaultdict(int)
    for i in gc.get_objects():
        after[type(i)] += 1
    for type_, count in after.items():
        if count > peaks[type_]:
            print type_, count
            peaks[type_] = count
```
  3. Hi Kay, I really appreciate the follow up, and your sharing your steps/experience with others. Of course, we're glad you came to a resolution too! -Steve
  4. Thanks for the env, Toke! What's everyone using for monitoring anyway? I've been using things built around gc.get_objects() and memory-profiler lately.
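The gc-based snapshot approach mentioned above can be sketched like this in plain stdlib Python 3; the function names and the simulated "leak" are my own, not from any of the posts:

```python
import collections
import gc


def snapshot():
    """Count live, gc-tracked objects by type name."""
    counts = collections.Counter()
    for obj in gc.get_objects():
        counts[type(obj).__name__] += 1
    return counts


before = snapshot()
leak = [[i] for i in range(50)]  # simulate a leak of container objects
after = snapshot()

# Types whose live count grew between the two snapshots.
growth = {
    name: after[name] - before[name]
    for name in after
    if after[name] - before[name] > 0
}
```

Note that `gc.get_objects()` only sees objects the collector tracks (containers like lists and dicts), so atomic types such as ints or bytearrays won't show up in the diff.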
  5. Hi Jakub, Sorry I missed your reply. (Now following the thread.) Thanks for the info; it gives me something to try out, and I'll verify with dev ops what the server time is and how to query it. My guess is your best bet would be to start and stop a timer through the web UI, query it through the API, and compare the reported time against a time service to determine whether the recorded start time matches what you'd expect.
  6. Hi Toke, Can you try updating to API version 1.8.2 if you have not yet? We added a fix for mem leaks when creating and destroying many Sessions.
  7. Hi Jakub, Could you give some more examples, please? You're starting a timer through the web UI, not the API, right? And the server is one of our cloud instances or something you have installed locally? Assuming you're using the web UI, is the duration correct but just the start and end times offset? There is/was a bug where reloading the tab which started the timer, or stopping the timer in a new tab or browser would cause an offset. (The duration would be correct, but the stop time would be recorded as the start time and the recorded stop time would be in the future.)
  8. Hi Kay, which OS are you using? It works fine for me on OSX. Does the action work correctly on previous versions of Photoshop? Can you verify that you have published a PSD file to the task you have selected in Connect?
  9. Download new hook file. The installation directories for Adobe Creative Cloud applications have changed with the recent release of 2020. In order to show the new applications, you must replace the existing hook file, which can be found in the following directories:
  • macOS: /Applications/ftrack-connect.app/Contents/MacOS/resource/hook
  • Windows: C:\Program Files (x86)\ftrack-connect-package-1.1.1\resource\hook
This update also adds Adobe Illustrator to Connect. Note that the integration itself continues to be distributed through Adobe Add-ons.
  10. Hi Jen, You'll want to find the Event object which was created in response to the publish and change the user_id. Unlike the info panel, the activity feed didn't seem to refresh automatically for me, so make sure to reload the page when you make your change.
```python
event = session.query(
    'Event where action is "asset.published"'
    ' and parent_id is "{}"'.format(asset_version['id'])
).one()
event['user_id'] = cool_user['id']
session.commit()
```
  11. Hi Jen, I think the issue is that we don't really know what's running on the same computer. With the Python API's event hub, we can either broadcast a message to the server, which is relayed to other listeners, or we can send that message to only the plugins that that API session has loaded. Setting aside the fact that the Adobe plugin is using Javascript (and doesn't have the local or synchronous option), there's still no middle ground of "transmit this event to other processes, but only those on my local machine". It would be possible to filter on the source id attribute of the event, but you'd still have to register each id as local--so every new browser session or new instance of Photoshop would have to get added to some list maintained by Connect. There are a few workflows that would benefit from that third option, and I have a rough proof of concept idea in my head, but we'd have to balance a more-thorough implementation with other priorities.
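The source-id filtering idea above can be sketched with plain dicts standing in for real event objects; the registry of local ids and all names here are hypothetical, since maintaining such a list is exactly the part Connect doesn't do today:

```python
# Hypothetical registry of event-hub source ids for processes on this machine;
# something like Connect would have to keep this up to date.
LOCAL_SOURCE_IDS = {'connect-abc123', 'photoshop-def456'}


def is_local(event):
    """Return True if the event's source id is registered as local."""
    return event.get('source', {}).get('id') in LOCAL_SOURCE_IDS


def handle(event):
    # Ignore events originating from other machines.
    if not is_local(event):
        return None
    return 'handled'


local_event = {'topic': 'my.topic', 'source': {'id': 'connect-abc123'}}
remote_event = {'topic': 'my.topic', 'source': {'id': 'browser-on-other-host'}}
```

The weakness is the one described above: every new browser session or application instance needs its id added to the registry before its events are treated as local.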
  12. Sans Connect, are you calling session.event_hub.wait() anywhere? (either after you make the session, or inside the register function of the resolver, but I don't recommend the latter.) That's required to actually poll the event queue and respond. I assume "Custom resolver.py NOT picked up" means that the path is still red. You can check whether the plugin has been loaded by searching sys.modules for your filename, but do note that the namespace will be a UUID.
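Searching sys.modules for the plugin file can be sketched like this; the stand-in module below only mimics a plugin loaded under a UUID namespace, and the path is illustrative:

```python
import sys
import types
import uuid

# Simulate a plugin loaded under a UUID module name, as the plugin
# discovery mechanism does (this module and path are stand-ins).
name = 'ftrack_plugin_' + uuid.uuid4().hex
module = types.ModuleType(name)
module.__file__ = '/path/to/plugins/resolver.py'
sys.modules[name] = module

# Search by filename rather than module name, since the name is a UUID.
matches = [
    mod_name for mod_name, mod in list(sys.modules.items())
    if getattr(mod, '__file__', None) and mod.__file__.endswith('resolver.py')
]
```

The `getattr(..., None)` guard matters because built-in and namespace modules may have no usable `__file__`.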
  13. Hi John, The path you'd like resolved, where is that file stored? resolve.py (really, a connected API session subscribed to the ftrack.location.request-resolve topic) could be running anywhere. The reason we let Connect handle it by default, and presumably why Lorenzo assumes you're running it on each artist's workstation, is that the mounts or file paths may differ on each workstation. If you're saving to a common location and can assume the path for each person, you could just have one central API session listening to the event and serving up the response. Does that clear up anything? You said you had a tool running, but that it was still failing to resolve in the web UI? Did you confirm you were receiving events? And responding appropriately?
  14. Hi Tom, What do you mean by "old components"? Are you replacing files on an AssetVersion (and want to make sure you don't have orphaned files on the ftrack.server Location)? Do you want to find Components for which a more-recent AssetVersion has a Component with the same name?
  15. Is there more to the error? Can you post some more of the log? What are you doing when the error pops up? Can you load the Diagnostics page in Server Settings?