Steve Petterborg

Steve Petterborg last won the day on October 22 2019



  1. I don't know that we document the link attribute specifically, though entity types are constructed dynamically in the Python client, and the full specifications are available as session.schemas. I've developed a few convenience methods for myself, but you can query this particular attribute type via:

         session.types['AssetVersion'].attributes.get('link').data_type

     It gets a little weird because what we actually send over the wire (the JSON response to a query) treats it as a list of dicts. Also, there's no fixed representation of the link (or _link or _api_link) attribute in the database, so you can't really inspect how exactly the link attribute looks, nor do you really need to. Using ancestors should give you the sort of control you want. Including this because I found it interesting:

         session.query(
             'AssetVersion where link like '
             '"%9fb3ff08-5fe5-11ea-a722-42010af00017%660cb3f7-bd3c-4cea-9afe-f38e1fe835a1"'
         ).one()
  2. Do you have "Schedule task dates with time" enabled? If so, disabling that might help.
  3. Hi Eder, Is that the real path on your computer? Have you tried upgrading to something newer? To just hide the errors, I do something like this for another lib in my dev environment. For this module specifically:

         import urllib3

         urllib3.disable_warnings()

     General warning suppression:

         import warnings

         import urllib3

         warnings.simplefilter("ignore", urllib3.exceptions.InsecurePlatformWarning)
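A stdlib-only illustration of the same category-level suppression, in case urllib3 isn't handy; UserWarning here is just a stand-in for whichever warning class you want to silence:

```python
import warnings

def noisy_call():
    # Stand-in for a library call that emits a warning on every invocation.
    warnings.warn("insecure platform", UserWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", UserWarning)  # silence the whole category
    noisy_call()

print(len(caught))  # 0: the warning never reaches the caller
```

The catch_warnings context manager keeps the filter change local, which is handy in tests; calling simplefilter at module import time makes it global, as in the urllib3 example above.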
  4. Hi Andriy, The trouble is that "link" is a string attribute, so you need to use the like operator:

         project_tasks = session.query(
             'Task where link like "%{}%"'.format(project['id'])
         ).all()

     Another option would be using the ancestors attribute like so:

         project = session.query('Project').first()
         proj_tasks = session.query(
             'Task where ancestors any (id is "{0}") or project_id is "{0}"'.format(
                 project['id']
             )
         ).all()
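The query strings themselves can be sanity-checked without a live session; this sketch just shows what the format calls above produce, using a made-up project id purely for illustration:

```python
# Hypothetical project id, for illustration only.
project_id = "660cb3f7-bd3c-4cea-9afe-f38e1fe835a1"

# Substring match against the string-valued "link" attribute.
like_query = 'Task where link like "%{}%"'.format(project_id)

# Relationship-based match via ancestors, plus direct children of the project.
ancestors_query = (
    'Task where ancestors any (id is "{0}") or project_id is "{0}"'.format(project_id)
)

print(like_query)
print(ancestors_query)
```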
  5. Download new hook file. We've just updated the launcher (again) for Illustrator 2020 on OSX. Instructions for use are the same as the original post.
  6. Hey Smack, Is encode_media the only call that freezes up like that? What if you copy the file to the ftrack.server Location yourself? Also, consider something like what we do with Connect where we rely on the event system to handle submitting the encode_media call.
  7. Jed, I agree that being more consistent in which time we use sounds reasonable. I rolled the system clock back on a local install, and here's what I observed: GET reflected the "slow" clock. A timer started via the API began with the expected amount of time on the clock, as displayed in the browser. Starting and stopping a timer with the API convenience methods created a timelog which showed the delta between the server and local time. I'd be really surprised if the various services that make up the cloud implementation had different times, but I'll need to run that by some folks in the office.
  8. Hi Jed, Thanks for the report. Does it always happen that way? Server time seemed fine when I pinged your instance with curl just now. The behavior with the timer looping through negative numbers sounds very odd. What does your code look like for starting timers? I continue to test with something pretty minimal:

         import ftrack_api

         session = ftrack_api.Session(auto_connect_event_hub=False)
         current_user = session.query(
             u'User where username is "{0}"'.format(session.api_user)
         ).one()
         task = session.query('Task').first()
         current_user.start_timer(task)
  9. Hi Jen, I believe this operation exists to support displaying information in DCC apps, but not to support interactivity like it seems you're wanting. If you had access to the web engine displaying the widget, maybe it would be possible to extract something from the URL, assuming that even changes as the user navigates around. Here it is being used in the Python API:
  10. Hi Jakub, I believe you can query the server time with simple HTTP methods by examining the "Date" header. Alternately, I can calculate and adjust for the offset this way:

          timer = current_user.start_timer(task)
          print timer['start']
          timer['start'] = arrow.utcnow()
          session.commit()

      Due to old issues with automatic time and date on macOS, my laptop is a consistent 1.7 seconds behind official time, as reported by my server. Since the local API session controls the end time, and therefore duration, of the Timelog, offsetting the start time when I create the Timer seems the easiest way to ensure the correct duration later. However, if you have different processes with different offsets running, then maybe you should leave the Timer object in server time and adjust the Timelog duration after you stop the timer.
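The "Date" header check can be sketched with the standard library alone. The offset arithmetic is the interesting part, so this sketch parses a canned header value rather than hitting a real server; the header string and the "local" timestamp are made up for the demo:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def server_offset(date_header, local_now):
    """Return (server time - local time) in seconds, given an HTTP Date header."""
    server_now = parsedate_to_datetime(date_header)  # timezone-aware datetime
    return (server_now - local_now).total_seconds()

# In practice the header would come from a real response, e.g.
#   urllib.request.urlopen(server_url).headers['Date']
# Here we use a fixed example value instead.
header = "Wed, 21 Oct 2015 07:28:00 GMT"
local = datetime(2015, 10, 21, 7, 28, 2, tzinfo=timezone.utc)  # local clock 2 s fast

print(server_offset(header, local))  # -2.0
```

Note that the Date header only has one-second resolution and includes network latency, so this gives a rough offset, not a precise one.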
  11. We have not updated the legacy API (module name ftrack), only the current API (module name ftrack_api), and I don't believe we plan on updating it either. Perhaps we can assist in migrating to the new API? Or you can look at the (unofficial) Blender integration, since that should use only the newer API.
  12. Hi John, You're using the Gantt scheduler on a Project's Tasks view, correct? I'm not aware of a modifier key to disable the behavior you noticed, which is based on the assumption that work is only being done on a work day. The list of workdays is under System Settings > Scheduling > Settings > Workweek. I understand you might not want to make the change there, so if it's a rare occurrence that you need to schedule a Task for the weekend, you can always edit the start and end dates manually by clicking on the Task name and editing the required attribute(s) in the info panel.
  13.    import atexit
         import time

         import ftrack_api

         %%memit
         for _ in xrange(100):
             session = ftrack_api.Session(auto_connect_event_hub=False)
             for index, entry in enumerate(atexit._exithandlers):
                 if entry[0] == session.close:
                     break
             session.close()
             del atexit._exithandlers[index]

      Personally I do a lot of testing/hacking in Jupyter, so I'm using a "magic" annotation here. It also has some side-effects, so I have my own branch of memory-profiler. Lorenzo likes running memory profiler on the command line so he can get a nice graph. This was my first attempt at comparing the counts of object types for your mem leak, followed by my second approach to the handle function. In neither case did I create new sessions in the handler.

         import collections
         import gc
         from functools import partial

         import ftrack_api

         def dd_compare(one, two):
             one_only = set(one.keys()).difference(two.keys())
             if one_only:
                 print 'Types we lost:\n    {}'.format(
                     '\n    '.join(str(type_) for type_ in one_only)
                 )
             two_only = set(two.keys()).difference(one.keys())
             if two_only:
                 print 'Types we gained:\n    {}'.format(
                     '\n    '.join(str(type_) for type_ in two_only)
                 )
             for key in set(one.keys()).intersection(two.keys()):
                 if one[key] == two[key]:
                     continue
                 print '{}: {} -> {}'.format(key, one[key], two[key])

         def handle_event(session, event):
             global after
             before = after
             after = collections.defaultdict(int)
             for i in gc.get_objects():
                 after[type(i)] += 1
             dd_compare(before, after)

         before = collections.defaultdict(int)
         after = collections.defaultdict(int)
         session = ftrack_api.Session(auto_connect_event_hub=True)
         print session.server_url
         handler = partial(handle_event, session)
         session.event_hub.subscribe('topic=*', handler)
         for i in gc.get_objects():
             before[type(i)] += 1
         session.event_hub.wait()

      And the second version of the handler, which reports new peak counts instead of diffs:

         peaks = collections.defaultdict(int)

         def handle_event(session, event):
             global peaks
             after = collections.defaultdict(int)
             for i in gc.get_objects():
                 after[type(i)] += 1
             for type_, count in after.items():
                 if count > peaks[type_]:
                     print type_, count
                     peaks[type_] = count
             print
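The snippets above need a live ftrack event hub, but the underlying gc type-counting trick is independent of ftrack. A self-contained Python 3 sketch of the same diffing idea, with a made-up class standing in for the suspected leaker:

```python
import collections
import gc

def type_counts():
    # Snapshot of how many live objects exist per type, as seen by the GC.
    counts = collections.defaultdict(int)
    for obj in gc.get_objects():
        counts[type(obj)] += 1
    return counts

def diff_counts(before, after):
    # Types whose live-object count changed between the two snapshots.
    return {
        type_: after[type_] - before[type_]
        for type_ in set(before) | set(after)
        if after[type_] != before[type_]
    }

class LeakyThing:
    """Stand-in for an object type suspected of leaking."""

before = type_counts()
leaked = [LeakyThing() for _ in range(5)]  # keep references so they stay alive
after = type_counts()

print(diff_counts(before, after)[LeakyThing])  # 5
```

Running a snapshot before and after each event (as the handlers above do) turns a slow leak into a visible, per-type growth number.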
  14. Hi Kay, I really appreciate the follow up, and your sharing your steps/experience with others. Of course, we're glad you came to a resolution too! -Steve