Store the event data


Remus Avram


The use case for this is that we would like to know, for a limited time (e.g. the last 2 weeks), what was changed in the DB.

As well, if we need to trigger jobs which take more than a few seconds when a specific event takes place, and something goes wrong, then we can check the DB or log to see on which task the trigger failed and restart it from that point.

We could write an action for doing this, but we were wondering if there is something that does the same thing already.


On 10/20/2016 at 1:40 PM, Remus Avram said:

As well, if we need to trigger jobs which take more than a few seconds when a specific event takes place, and something goes wrong, then we can check the DB or log to see on which task the trigger failed and restart it from that point.

I see - something to think about when we have time to revisit the change events. 

A possible way to approach this problem, which I know has been mentioned by other large clients, is to separate and parallelise the event processing. For example, you could have a very simple event handler that just receives the ftrack.update (or other) events and pushes them into a custom (AMQP?) queue that you've set up, with one or more workers pulling jobs from the queue. That way you can process events while minimising the risk of errors or other problems halting everything. You could also scale the workers for efficiency, and with persistent queues you can be somewhat certain that you never miss any events.
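As a rough sketch of that approach: a thin forwarder that subscribes to ftrack.update events and publishes them to a RabbitMQ queue. The queue name `ftrack-events`, the localhost broker, and the `serialize_event` helper are all assumptions for illustration, not part of ftrack; `ftrack_api` and `pika` are third-party packages.

```python
import json


def serialize_event(event):
    """Flatten an ftrack event dict into a JSON payload for the queue."""
    return json.dumps({
        'topic': event.get('topic'),
        'data': event.get('data'),
        'source': event.get('source'),
    })


def forward_events():
    # Deferred imports: both are third-party packages.
    import ftrack_api
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    # Durable queue + persistent messages survive a broker restart.
    channel.queue_declare(queue='ftrack-events', durable=True)

    def forward(event):
        # Keep the handler minimal so nothing slow runs inside it.
        channel.basic_publish(
            exchange='',
            routing_key='ftrack-events',
            body=serialize_event(event),
            properties=pika.BasicProperties(delivery_mode=2),  # persistent
        )

    session = ftrack_api.Session()
    session.event_hub.subscribe('topic=ftrack.update', forward)
    session.event_hub.wait()  # block and dispatch events to `forward`
```

Calling `forward_events()` would then run the forwarder; the actual processing lives in the separate workers that consume from the queue.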

Would be nice to hear what others are doing?


  • 2 weeks later...

Hi Mattias,

thanks for asking! We are currently working on this. The plan is to have a second DB (not touching the ftrack DB, as we would like to be able to update the ftrack server easily), an action which receives all the ftrack events and spools them to the custom DB, and a daemon which reads from the custom DB and manages the data. It would be easier if ftrack logged the events (in JSON format) to a DB or a log file, with a setting in the ftrack configuration for the maximum number of days to keep the data. The default should be 0 days for studios that don't use the data.
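A minimal sketch of such a spooler, assuming SQLite for the custom DB; the schema, function names, and the `RETENTION_DAYS` setting are all illustrative, not anything ftrack provides:

```python
import json
import sqlite3
import time

RETENTION_DAYS = 14  # hypothetical config value; 0 would mean "keep nothing"


def init_db(path=':memory:'):
    """Create the event table if it does not exist yet."""
    db = sqlite3.connect(path)
    db.execute(
        'CREATE TABLE IF NOT EXISTS events ('
        ' id INTEGER PRIMARY KEY,'
        ' received REAL NOT NULL,'   # unix timestamp when spooled
        ' payload TEXT NOT NULL)'    # full event as JSON
    )
    return db


def spool(db, event):
    """Store one incoming event verbatim, stamped with arrival time."""
    db.execute(
        'INSERT INTO events (received, payload) VALUES (?, ?)',
        (time.time(), json.dumps(event)),
    )
    db.commit()


def prune(db, now=None):
    """Drop events older than the retention window (run periodically)."""
    cutoff = (now or time.time()) - RETENTION_DAYS * 86400
    db.execute('DELETE FROM events WHERE received < ?', (cutoff,))
    db.commit()
```

The daemon would then read rows back out of `events`, process them, and a periodic `prune()` call implements the "max days to keep" behaviour described above.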


  • 3 months later...

Fantastic! There is actually some development in this area. It was mentioned in a rather technical release note in http://ftrack.rtd.ftrack.com/en/stable/release/release_notes.html#release-3.3.41:

Quote

Events, Local install

Added support for passing update events via messaging queue to event server instead of via synchronous endpoint. By default this is disabled but will be enabled by default in later releases.

See also

      Fetch update events directly from internal queue

This is very early stage and we're trying it out with a client, more information can be found here: http://ftrack.rtd.ftrack.com/en/stable/administering/managing_local_installation/configuring_server_options.html#administering-managing-local-installation-configuring-server-options-enable-amqp-event-server

The idea is that our update events are not passed directly to the event server (and emitted from there) but first go through our AMQP messaging queue. Using this you can attach your own custom AMQP worker, which can be more fault tolerant than listening to the event server. E.g. the queue is persistent until restarted, so a crashed worker will never miss any events.
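A sketch of what such a custom AMQP worker could look like, using the third-party `pika` client with manual acknowledgements so that a crash mid-processing leaves the message on the queue for redelivery. The queue name, broker host, and `extract_entities` helper are assumptions; the payload shape mirrors the `data.entities` list that ftrack.update events carry.

```python
import json

QUEUE = 'ftrack-events'  # assumed queue name; check your server configuration


def extract_entities(body):
    """Pull the changed entities out of a queued update-event payload."""
    event = json.loads(body)
    return (event.get('data') or {}).get('entities', [])


def run_worker():
    import pika  # third-party AMQP client

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_qos(prefetch_count=1)  # at most one unacked message at a time

    def on_message(ch, method, properties, body):
        try:
            entities = extract_entities(body)
            # ... hand `entities` off to the slow job here ...
            ch.basic_ack(delivery_tag=method.delivery_tag)
        except Exception:
            # Requeue on failure: unacked messages are redelivered, so a
            # crashed or failing worker never silently loses an event.
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()
```

Because the message is only acknowledged after processing succeeds, scaling out is just a matter of starting more worker processes against the same queue.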

Again, this is early days and I wouldn't call it experimental - but it is not production proven yet. Once it is we will start promoting it more as an alternative to the method that you use right now.


1 hour ago, Mattias Lagergren said:

There is actually some development in this area. Mentioned with a rather technical release note

Sounds great! Just had a look at the release notes of version 3.3.41.

1 hour ago, Mattias Lagergren said:

The idea is that our update events are not passed directly to the event server (and emitted from there) but first go through our AMQP messaging queue. Using this you can attach your own custom AMQP worker, which can be more fault tolerant than listening to the event server. E.g. the queue is persistent until restarted, so a crashed worker will never miss any events.

This sounds like the safest way of processing the events. This was one of our problems: how to be sure that all the events are processed.

1 hour ago, Mattias Lagergren said:

Again, this is early days and I wouldn't call it experimental - but it is not production proven yet. Once it is we will start promoting it more as an alternative to the method that you use right now.

Even if it's not production ready yet, I still have some questions:

  • are only the events with topic 'update' included, or all topics?
  • for how long are the events stored? Maybe a setting in ftrack.ini with the number of days to keep the events?
  • is there a status for each event? Can we update it?
  • what information is stored for each event?

At the moment our system is quite flexible and can be changed to fetch the events from there.

Looking forward to hearing more about it!


17 hours ago, Remus Avram said:
  • are only the events with topic 'update' included, or all topics?
  • for how long are the events stored? Maybe a setting in ftrack.ini with the number of days to keep the events?
  • is there a status for each event? Can we update it?
  • what information is stored for each event?
  • At the moment only events emitted by the ftrack application (update) go through the queue; other events do not.
  • Events are stored until processed and the store is in memory and not persistent if the server goes down.
  • No real status, it is processed or not processed by the worker.
  • The information is the same as what you have in the event emitted by the event server.

To summarise this is a more reliable alternative to listening to the event server.
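Since the queue itself keeps no per-event status, a worker can track its own. A sketch, assuming each ftrack event carries a unique `id` key (hypothetical here) and using SQLite as the status store:

```python
import sqlite3


def init_status_db(path=':memory:'):
    """Create a table recording which event ids have been handled."""
    db = sqlite3.connect(path)
    db.execute('CREATE TABLE IF NOT EXISTS processed (event_id TEXT PRIMARY KEY)')
    return db


def already_processed(db, event_id):
    """True if this event id was handled before (e.g. after a redelivery)."""
    row = db.execute(
        'SELECT 1 FROM processed WHERE event_id = ?', (event_id,)
    ).fetchone()
    return row is not None


def mark_processed(db, event_id):
    # INSERT OR IGNORE makes marking idempotent across redeliveries.
    db.execute('INSERT OR IGNORE INTO processed (event_id) VALUES (?)', (event_id,))
    db.commit()
```

Checking `already_processed()` before doing the real work makes the worker idempotent, which matters because a message that was processed but crashed before the ack would otherwise be handled twice on redelivery.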


  • 1 year later...

Archived

This topic is now archived and is closed to further replies.
