All Activity


  1. Today
  2. Actually, I managed to get the extension working with npm in development mode (with two modifications). But yarn is a good point; I will keep trying to build with it. The actual log is attached. And my message was about updating the documentation so that modifying the Adobe extension would be less painful for others. yarn_error.txt
  3. import atexit
     import time

     import ftrack_api

     %%memit
     for _ in xrange(100):
         session = ftrack_api.Session(auto_connect_event_hub=False)
         for index, entry in enumerate(atexit._exithandlers):
             if entry[0] == session.close:
                 break
         session.close()
         del atexit._exithandlers[index]

     Personally I do a lot of testing/hacking in Jupyter, so I'm using a "magic" annotation here. It also has some side effects, so I have my own branch of memory-profiler. Lorenzo likes running memory-profiler on the command line so he can get a nice graph. https://pypi.org/project/memory-profiler/

     This was my first attempt at comparing the counts of object types for your mem leak, followed by my second approach to the handle function. In neither case did I create new sessions in the handler.

     import collections
     import gc
     from functools import partial

     import ftrack_api

     def dd_compare(one, two):
         one_only = set(one.keys()).difference(two.keys())
         if one_only:
             print 'Types we lost:\n    {}'.format(
                 '\n    '.join(str(type_) for type_ in one_only)
             )
         two_only = set(two.keys()).difference(one.keys())
         if two_only:
             print 'Types we gained:\n    {}'.format(
                 '\n    '.join(str(type_) for type_ in two_only)
             )
         for key in set(one.keys()).intersection(two.keys()):
             if one[key] == two[key]:
                 continue
             print '{}: {} -> {}'.format(key, one[key], two[key])

     def handle_event(session, event):
         global after
         before = after
         after = collections.defaultdict(int)
         for i in gc.get_objects():
             after[type(i)] += 1
         print dd_compare(before, after)

     before = collections.defaultdict(int)
     after = collections.defaultdict(int)
     for i in gc.get_objects():
         before[type(i)] += 1

     session = ftrack_api.Session(auto_connect_event_hub=True)
     print session.server_url
     handler = partial(handle_event, session)
     session.event_hub.subscribe('topic=*', handler)
     for i in gc.get_objects():
         before[type(i)] += 1
     session.event_hub.wait()

     And the second approach to the handle function, tracking peak counts per type:

     peaks = collections.defaultdict(int)

     def handle_event(session, event):
         global peaks
         after = collections.defaultdict(int)
         for i in gc.get_objects():
             after[type(i)] += 1
         for type_, count in after.items():
             if count > peaks[type_]:
                 print type_, count
                 peaks[type_] = count
         print
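     The object-type counting above can be written a little more compactly in modern Python using collections.Counter (a generic sketch, Python 3 syntax, not ftrack-specific; the simulated "leak" is just a list of lists kept alive):

     ```python
     import collections
     import gc

     def snapshot():
         """Count live gc-tracked objects by type name."""
         counts = collections.Counter()
         for obj in gc.get_objects():
             counts[type(obj).__name__] += 1
         return counts

     def diff(before, after):
         """Return {type_name: (before_count, after_count)} for changed types only."""
         return {
             name: (before[name], after[name])
             for name in set(before) | set(after)
             if before[name] != after[name]
         }

     before = snapshot()
     leaked = [[] for _ in range(1000)]  # simulate a leak: 1000 lists kept alive
     after = snapshot()
     changes = diff(before, after)  # "list" count rises by at least 1000
     ```

     Comparing two snapshots like this around a suspect operation quickly narrows a leak down to a handful of types.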
  4. Hi, The Adobe extension consists of two parts: ftrack-connect-spark-adobe (Adobe-specific logic) and ftrack-connect-spark (shared UI). When building the extension, it copies the UI distribution (`node_modules/ftrack-connect-spark/dist/`) to build/staging/ftrack_connect_spark/. I don't think the distribution files are actually checked into ftrack-connect-spark, which means that you either have to build those files locally or copy them over from the extension if you are not making any changes to the UI. To build the extension with changes to the UI, you will need to:
     1. Clone the ftrack-connect-spark repository.
     2. Build it (yarn install && yarn dist).
     3. Configure ftrack-connect-spark-adobe to use the local package:
        - In ftrack-connect-spark: yarn link
        - In ftrack-connect-spark-adobe: yarn link ftrack-connect-spark
     4. Rebuild ftrack-connect-spark-adobe using grunt.
     Regards, Lucas
  5. tokejepsen

    Memory leak

    Good point. Haven't been able to track down the memory leak, so if you have some pointers, that would be great!
  6. Hi Kay, I really appreciate the follow-up and your sharing your steps and experience with others. Of course, we're glad you came to a resolution too! -Steve
  7. Thanks for the env, Toke! What's everyone using for monitoring anyway? I've been using things built around gc.get_objects() and memory-profiler lately.
  8. Yesterday
  9. Actually the issue was that I replaced the wrong .py file (smh). Our studio is a macOS environment and I fixed the issue last month, so here are the steps I took in case anyone else has trouble finding the Contents folder: In Finder, go to your Applications folder > right-click on ftrack-connect.app > Show Package Contents > MacOS > resource > hook > replace ftrack_connect_adobe_hook.py with the hook from Steve's first post above. 'Log & Quit' ftrack Connect, then open it and log back in. The Photoshop 2020 action should show up now.
  10. We're using the Notification event listener and modified it to listen for 'Notes' statuses. It's worked pretty well so far and can be easily modified. Thanks to Mattias for making it! Here's the bitbucket link https://bitbucket.org/snippets/ftrack/9qpXB/event-listener-notification-example
  11. 4. And if I copy the extension I get the following, but I expected to see the ftrack panel as in the released ftrack Adobe extension.
  12. Thanks to all your feedback and bug reports, we are finally getting closer to a final release of the upcoming API 2.0 for Python 2/3. If you want to install (at your own risk) the latest RC2 version, please use the following command: pip install ftrack-python-api==2.0.0rc2 Full change log can be found here; please report, through the usual channels, any issue or bug you might find. Cheers. L.
  13. 1. Your building_from_source.rst does not mention that on Windows you have to install windows-build-tools and grunt. I did it globally.
      2. I didn't manage to start debugging with the following command; log attached:
         grunt debug --family=CC2018
      3. You still have no issue-reporting platform(( out.txt
  14. Last week
  15. tokejepsen

    Memory leak

    Hey Steve, I tried 1.8.2 and it does not fix our issue. Since other people are not experiencing the same thing, I'm suspecting it might be our environment. Would greatly appreciate it if someone could test on their end with this (conda) environment:

    name: ftrack-pipeline-environment
    channels:
      - defaults
    dependencies:
      - certifi=2019.6.16=py27_0
      - pip=19.1.1=py27_0
      - python=2.7.16=hcb6e200_0
      - setuptools=41.0.1=py27_0
      - sqlite=3.29.0=h0c8e037_0
      - vc=9=h7299396_1
      - vs2008_runtime=9.00.30729.1=hfaea7d5_1
      - wheel=0.33.4=py27_0
      - wincertstore=0.2=py27hf04cefb_0
      - pip:
        - arrow==0.14.2
        - backports-functools-lru-cache==1.5
        - cachetools==3.1.1
        - chardet==3.0.4
        - clique==1.5.0
        - ftrack-python-api==1.8.2
        - google-api-python-client==1.7.9
        - google-auth==1.6.3
        - google-auth-httplib2==0.0.3
        - google-auth-oauthlib==0.4.0
        - httplib2==0.13.0
        - idna==2.8
        - jsondiff==1.2.0
        - oauthlib==3.0.2
        - pyasn1==0.4.5
        - pyasn1-modules==0.2.5
        - pyparsing==2.4.0
        - python-dateutil==2.8.0
        - requests==2.22.0
        - requests-oauthlib==1.2.0
        - rsa==4.0
        - six==1.11.0
        - slacker==0.9.65
        - slacker-log-handler==1.7.1
        - termcolor==1.1.0
        - uritemplate==3.0.0
        - urllib3==1.25.3
        - websocket-client==0.56.0
    prefix: C:\Users\admin\miniconda\envs\ftrack-pipeline-environment
  16. Hi Jakub, Sorry I missed your reply. (Now following the thread.) Thanks for the info; it gives me something to try out, and I'll verify with dev ops what the server time is and how to query it. My guess is your best bet would be to start and stop a timer through the web UI, then query that through the API, and compare the reported time with a time service to determine whether the recorded server start time matches what you would expect.
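     The "compare the reported time with a time service" step could be as simple as the following sketch (the timestamps are made-up values and the actual ftrack query is omitted):

     ```python
     import datetime

     UTC = datetime.timezone.utc

     def skew_seconds(recorded_start, observed_start):
         """Signed skew between the start time the server recorded and the
         start time you observed locally (both timezone-aware datetimes)."""
         return (recorded_start - observed_start).total_seconds()

     # Hypothetical values: the server recorded the timer starting 30 s
     # later than you observed it.
     recorded = datetime.datetime(2019, 11, 5, 12, 0, 30, tzinfo=UTC)
     observed = datetime.datetime(2019, 11, 5, 12, 0, 0, tzinfo=UTC)
     skew = skew_seconds(recorded, observed)  # 30.0
     ```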
  17. Hi Toke, Can you try updating to API version 1.8.2 if you have not yet? We added a fix for mem leaks when creating and destroying many Sessions.
  18. tokejepsen

    Memory leak

    Hey, I've been trying to figure out why our ftrack event listener machine is running out of memory. It seems like the event listener is holding onto something, so it does not get garbage collected. Take a simple event listener like this:

    import ftrack_api

    def test(event):
        session = ftrack_api.Session()

    session = ftrack_api.Session(auto_connect_event_hub=True)
    session.event_hub.subscribe("topic=ftrack.update", test)
    session.event_hub.wait()

    It initially starts at 27 MB, but with every ftrack event that triggers it, a couple of MB get added. It never gets back down to 27 MB. Is anyone experiencing the same?
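    The symptom described here, objects never being collected because something still references them, can be demonstrated in plain Python with weakref (a generic sketch, not ftrack-specific; Session here is a stand-in class, and registry simulates whatever hidden structure holds the reference):

    ```python
    import gc
    import weakref

    class Session(object):
        """Stand-in for any object we expect to be garbage collected after use."""
        pass

    registry = []  # simulates a hidden reference holder (e.g. a subscriber list)

    def make_session(register):
        s = Session()
        if register:
            registry.append(s)  # something, somewhere, keeps a reference
        return weakref.ref(s)   # a weak reference does not keep s alive by itself

    ref_free = make_session(register=False)
    ref_held = make_session(register=True)
    gc.collect()
    # ref_free() is None: the unreferenced session was collected.
    # ref_held() is not None: the registry pins the other one in memory.
    ```

    Watching which weak references stay alive after gc.collect() is a cheap way to confirm that some structure (a handler list, a cache, an atexit entry) is pinning sessions.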
  19. Hi Jen, Thanks for bringing this to our attention. I understand your need for this feature and I've added this as a request. Thanks again! Regards Simon
  20. I'd suggest sticking to CentOS 7 as long as possible atm. From direct experience, CentOS 8 is not ready for production yet. L.
  21. Thanks Lorenzo for the answer. We'll just use CentOS 7 for this production then. Maybe I'll try to make something work with the code. N
  22. @Nero Demaerschalk, CentOS 8 is something we'll be looking into as soon as the distro is stable for desktop use (I'd suggest waiting for more than the .1 release). For Python 3 we have a dedicated thread in the forum at this address; feel free to test and let us know if you find any issues. We are working these days to release a new API 2.0 for Python 2 and Python 3 for wider use, but its inclusion in Connect will require more time due to some non-backward-compatible changes in the API. Hope it helps. L.
  23. Hi @jen_at_floyd, there's a patch already merged in connect to mitigate this and it'll be available in an upcoming version of connect.
  24. Earlier
  25. On our Mac machines, it's currently impossible to open multiple instances of the ftrack-connect application: launching the app again has no effect because the launch is effectively canceled by the ftrack-connect instance already running. On Windows we do not see this behavior, and it creates an adverse user experience. In the following screenshot you can see multiple instances of ftrack-connect running at once: In many cases the second instance is opened accidentally, but it shouldn't be possible at all.
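     Single-instance behavior like the Mac one is commonly implemented by grabbing a machine-wide lock at startup. A generic sketch (not ftrack-connect's actual mechanism; the port number is an arbitrary placeholder):

     ```python
     import socket

     def acquire_single_instance_lock(port=54617):
         """Bind a localhost port as a process-wide mutex. Returns the socket
         (keep it alive for the process lifetime) or None if another instance
         already holds the lock."""
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         try:
             sock.bind(("127.0.0.1", port))
             return sock
         except OSError:
             sock.close()
             return None

     first = acquire_single_instance_lock()   # succeeds: we are the first "instance"
     second = acquire_single_instance_lock()  # fails: the port is already bound
     ```

     A launcher using this pattern would exit (or focus the existing window) when the lock cannot be acquired, which matches the behavior described on the Mac side.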
  26. Sorry for reviving an old thread but an Undo feature is pretty high on my studio's list of feature requests. We sometimes run into an issue where our workers delete a task when they actually meant to delete a version or vice versa. A simple way to revert the past operation would be great.
  27. Hi there, Just wanted to know what the progress is in bringing ftrack-connect to CentOS 8 (and Python 3, for that matter). Since the VFX Reference Platform sees 2020 as the jump to Python 3, can we expect something soon?
  28. Ok so having update events not show up via the event hub when AMQP is enabled is the expected behavior. Thanks for clarifying!
  29. Hi, thanks for bringing this to our attention. We updated our documentation with new information: Please note that this is an advanced feature which requires you to set up and run your own RabbitMQ (https://www.rabbitmq.com/) broker instead of the one bundled with ftrack. Once you have your own RabbitMQ server running, you can update the ftrack.ini "ftrack.amqp_host" setting to have ftrack use that server instead. This will not be standard until we make the switch to RabbitMQ. Thanks again! Regards Simon
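     The ftrack.ini change described above might look like this (a sketch: the hostname is a placeholder, and only the `ftrack.amqp_host` setting name comes from the reply):

     ```ini
     ; ftrack.ini -- point ftrack at your own RabbitMQ broker
     ; (hostname is a placeholder)
     ftrack.amqp_host = rabbitmq.example.com
     ```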