
Is it still necessary to add a newly created object to its parent?


Fernando


Hello,

Previously, I used the following code when creating a new shot (I have similar code for creating an asset/sequence/task):

 

new_shot = session.create('Shot', {
    'name': shot_code,
    'description': shot_description,
    'parent': ftrack_parent_folder
})
ftrack_parent_folder['children'].append(new_shot)
session.commit()

This has always worked in the past, but suddenly I have started getting the following error:
 

ftrack_api.exception.DuplicateItemInCollectionError: Item <dynamic ftrack Shot object 139906143642576> already exists in collection <ftrack_api.collection.Collection object at 0x7f3e6fe8f950>

The error is raised when adding the shot to the parent folder, on this line:

ftrack_parent_folder['children'].append(new_shot)


Does this mean it is no longer necessary to add the shot to the children list of its parent, because it is already added automatically when the shot is created? I've always found it strange that I needed to explicitly add the newly created object to its parent as well.
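For reference, a minimal sketch of the create-only variant I'd switch to, assuming the `parent` relationship alone keeps `children` in sync (which is exactly what I'd like confirmed):

new_shot = session.create('Shot', {
    'name': shot_code,
    'description': shot_description,
    'parent': ftrack_parent_folder
})
# No explicit append to ftrack_parent_folder['children'] here; the
# DuplicateItemInCollectionError above suggests the session already
# mirrors the 'parent' relationship into that collection.
session.commit()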

Cheers,

Fernando

 



My apologies for the late reply; this problem is a bit more complex than I thought it was.
My initial problem statement is probably wrong: adding the shot to the parent's children does not cause the problem.

My actual problem seems to be that the ftrack server returns errors that are unrelated to the current session.commit().
For instance, an error will occur when creating a certain task, and then I will receive the same error for hundreds of other, unrelated session.commit() requests (both updates and creations).

This is the error I receive when creating a shot task fails (above the dotted line is just a JSON object of the message data):

b'{"type":"shotTask","event":"create","data":{"obj":{"id":1046164,"stage":"compositing","shot":"070_0010","revision":{"id":1342158,"number":1,"date":"2022-08-24T08:27:10.182Z","author":5899},"name":"Compositing","status":"cancelled","user":null,"beginDate":null,"endDate":null,"dueDate":null,"bidDays":null,"spentDays":null,"customProperties":null,"_type":"shotTask","_postProcessed":true},"authorID":5899}}'
-----------------------------
Traceback (most recent call last):
  File "rabbitmq_receiver.py", line 340, in callback
    functions_dict[event+" "+type](json_object['data'])
  File "rabbitmq_receiver.py", line 168, in create_shot_task
    name, begin_date, end_date, due_date, priority, bid_days, user)
  File "/root/workspace/ftrack-scripts/src/ftrack_common_lib/ftrack_common.py", line 362, in ftrack_create_task
    session.commit()
  File "/usr/local/lib/python3.6/site-packages/ftrack_api/session.py", line 1308, in commit
    result = self.call(batch)
  File "/usr/local/lib/python3.6/site-packages/ftrack_api/session.py", line 1687, in call
    raise ftrack_api.exception.ServerError(error_message)
ftrack_api.exception.ServerError: Server reported error: IntegrityError((MySQLdb._exceptions.IntegrityError) (1062, "Duplicate entry 'b45adbd0-4b7f-4f35-8f92-2c8dc0d5c215-Compositing (compositing)' for key 'context_parent_id_key'") [SQL: u'INSERT INTO context (context_type, name, parent_id, created_at, id, created_by_id) VALUES (%s, %s, %s, %s, %s, %s)'] [parameters: ('task', 'Compositing (compositing)', u'b45adbd0-4b7f-4f35-8f92-2c8dc0d5c215', datetime.datetime(2022, 8, 24, 9, 0, 1, 671123), u'41842636-ee23-4210-ab87-f72e8aafab2b', u'bcdf57b0-acc6-11e1-a554-f23c91df1211')] (Background on this error at: http://sqlalche.me/e/gkpj))

And after this error occurs, I receive the exact same error for hundreds of other server requests that are unrelated to the previous shot task.
For instance, here is an asset task update request that returns the same integrity error:

b'{"type":"assetTask","event":"update","data":{"id":931987,"obj":{"id":931987,"asset":"cer_marigold","stage":"surfacing","variant":"default","revision":{"id":1342301,"number":6,"date":"2022-08-24T08:33:21.943Z","author":54887},"name":"Prop surfacing","status":"in_progress","user":"rolandf","beginDate":"2022-08-24","endDate":null,"dueDate":"2022-09-23","bidDays":null,"spentDays":null,"customProperties":null,"_type":"assetTask","_postProcessed":true},"existingObj":{"id":931987,"revision":{"id":1319613,"number":5,"date":"2022-08-18T07:25:07.939Z","author":16242},"asset":"cer_marigold","stage":"surfacing","variant":"default","name":"Prop surfacing","status":"ready_to_start","user":"rolandf","beginDate":null,"endDate":null,"dueDate":"2022-09-23","bidDays":null,"spentDays":null,"customProperties":null},"authorID":54887}}'
-----------------------------
Traceback (most recent call last):
  File "rabbitmq_receiver.py", line 340, in callback
    functions_dict[event+" "+type](json_object['data'])
  File "rabbitmq_receiver.py", line 42, in update_task
    end_date, due_date, task_name, task_type, new_user, bid_days, priority)
  File "/root/workspace/ftrack-scripts/src/ftrack_common_lib/ftrack_common.py", line 220, in ftrack_update_task
    session.commit()
  File "/usr/local/lib/python3.6/site-packages/ftrack_api/session.py", line 1308, in commit
    result = self.call(batch)
  File "/usr/local/lib/python3.6/site-packages/ftrack_api/session.py", line 1687, in call
    raise ftrack_api.exception.ServerError(error_message)
ftrack_api.exception.ServerError: Server reported error: IntegrityError((MySQLdb._exceptions.IntegrityError) (1062, "Duplicate entry 'b45adbd0-4b7f-4f35-8f92-2c8dc0d5c215-Compositing (compositing)' for key 'context_parent_id_key'") [SQL: u'INSERT INTO context (context_type, name, parent_id, created_at, id, created_by_id) VALUES (%s, %s, %s, %s, %s, %s)'] [parameters: ('task', 'Compositing (compositing)', u'b45adbd0-4b7f-4f35-8f92-2c8dc0d5c215', datetime.datetime(2022, 8, 24, 9, 0, 4, 166823), u'41842636-ee23-4210-ab87-f72e8aafab2b', u'bcdf57b0-acc6-11e1-a554-f23c91df1211')] (Background on this error at: http://sqlalche.me/e/gkpj))

So my first question is: how could this be happening? Why would the server keep returning the same error?
My second question is why the first shot task creation fails at all: it is not possible that this task already exists. Is there any way to get more information on this "integrity error"?

I've also attached a .py file containing my two functions for creating and updating an ftrack task; maybe that can help to identify the problem.

Cheers,
Fernando

ftrackcreatetask.py


Hi Fernando,

The ftrack API is session based. Any operations you do are saved into the session. When you `commit()`, these operations are persisted on the server. If there is a failure, the operations won't be persisted on the server, but they also won't be removed from the session. If you don't handle the error and just leave the failing operation in the session's queue, then `commit()` again, even after having added more operations, things will fail again.

Methods like `rollback()` and `reset()` will help you manage the session's operations.

In your case, it looks like one task's creation failed during a `commit()` call, but your script kept going, adding more create, update, or delete operations, and failed every time it tried to commit because the original task creation was still queued in the session.
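A minimal sketch of the pattern, assuming you want to discard the failing operation and continue (how you actually handle the failure is up to you):

import logging

import ftrack_api

session = ftrack_api.Session()  # credentials picked up from the environment

def safe_commit(session):
    """Commit pending operations, rolling back on failure so the
    failing operation is not re-sent by every subsequent commit."""
    try:
        session.commit()
    except ftrack_api.exception.ServerError:
        logging.exception('Commit failed; discarding pending operations.')
        # rollback() clears the session's recorded operations and reverts
        # local modifications, so the next commit starts clean.
        session.rollback()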


Hi Patrick,

Thanks for the answer; that is indeed what was happening.
I've started calling rollback() when an exception occurs during a commit, and that seems to have solved the problem! So thank you so much for the hint.

I have one more question regarding session.reset(): I thought I would use it to refresh my connection to the ftrack server every once in a while (to avoid a timeout).
Is this the correct approach, or is there another way to keep a connection from timing out?

Cheers,
Fernando
 



Hi Fernando,

The `session.reset()` call only clears the internal cache of objects; it doesn't affect the connection.

Each call to the server is independent and is re-authenticated every time. Given that way of working and the stateless nature of HTTP, there is no timeout on the session lifetime itself. You can hit a timeout if a _single query to the server_ takes too much time; this is capped at 60 seconds by default.
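So in a long-running process there is no connection to keep alive. At most you can call `session.reset()` now and then to drop stale cached entities, e.g. (a sketch; `wait_for_next_message` and `handle` are placeholders for your RabbitMQ consumer):

while True:
    message = wait_for_next_message()  # placeholder for your queue consumer
    handle(message, session)           # create/update entities and commit
    session.reset()                    # clear locally cached entities only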

Cheers,
Patrick

