Brad

Brad's Achievements

  1. TLDR: It seems ftrack_connect's base plugin directory is hardcoded and cannot be fully repointed on Windows, so separate ftrack_connect installs always share it. And the live SaaS ftrack server is, necessarily, a single network service shared between production and development. That's dangerous to code against.

     Explanation: I'm trying to both use ftrack and do development with it in parallel. While I may be a one-man shop, this is exactly how I'd expect any major facility to use it: they'll have an existing release or production pipeline in use, built on ftrack, and at the same time a pipeline or development team working on enhancing that pipeline, hopefully in various development sandboxes. In an ideal situation, a development track is a branch from production. (Yes, I know of plenty of facilities that do all their pipeline dev live in their production environment, but I'd like to talk about the more mature and refined approach here rather than barbaric practices.)

     I am currently hitting two major roadblocks to keeping separate "production" (prod) and "development" (dev) environments for ftrack and ftrack_connect. I'd like to describe my current setup and why those two issues are causing problems. If anyone has feedback or suggestions for avoiding these issues I'd love to hear them; I've only been digging into ftrack for about a week at this point and I may be unfamiliar with existing, better solutions.

     My setup: The root of my setup is currently Anaconda (Miniconda specifically): Miniconda — Conda documentation. The reason I use conda is that it lets me pick a specific version of Python and hide my system-level Python. I had issues installing ftrack_connect from the repo (ftrack / ftrack-connect — Bitbucket) because of PyQt version dependencies, and I found it better to just determine which version of Python you use for your Windows installer and start from that same version. This is a more common problem in Python development than you'd think. Also, Anaconda is supported by PyCharm, my primary development IDE for Python. This is something that plain Python virtual environments (venv) don't help with.

     Anyhow, I have a conda environment that I'll call "dev" with the exact same version of Python you embed in your binary installer. I can then easily pull from Bitbucket, install with "pip install -e .", and run "python -m ftrack_connect". All seems well. I can even debug it by loading the git repo as a project in PyCharm and setting up the debugger to run the same module. On its surface, this is a great development environment for both ftrack_connect and plugins. Further, I could set up another conda environment that I'll call "prod" and do the same steps, but check out specific released versions of ftrack_connect and my own plugins (rather than dev HEAD or feature branches), and I'd have parallel dev and prod installs of ftrack_connect that are easy to switch between via conda. However, there are two problems with this.

     Issue #1: Both installs (dev, prod) of ftrack_connect ALWAYS put their base plugins in the same directory. This seems to be because they use the 'appdirs' package to set their initial plugin path: appdirs · PyPI. It's true that you can specify a plugin path via the environment: Plugin directory — ftrack connect documentation. However, I'd point out the documentation is pedantically accurate when it says "you can add additional places where...". If I set up different plugin directories for "dev" and "prod" via the referenced environment variable, it doesn't change anything about the use of the shared primary plugins directory (under my home_dir/AppData/Local/ftrack/...), even though the two "installs" of ftrack_connect are different versions of the code with different env vars for the plugins dir. (A rough sketch of how this path resolution appears to work is included after the post list below.) What I need here is an "override plugins dir" env var, not an "additional plugins dir" env var; that way each environment would stand alone. The best workaround I can come up with at the moment is setting up a separate user on my workstation, because that user would have a different home_dir and hence a different AppData dir. This is a terrible idea.

     Issue #2: Developing against network services is hard, no doubt, mostly because the infrastructure needed to safely branch or spin up a development instance of a network service is often non-trivial. Even if I fully solve Issue #1, I'm stuck with my development code running against a production network service. If I make a mistake, I can corrupt live production data ridiculously easily. I am of course aware of the possibility of moving to local installs of ftrack and maintaining a separate development server; I used to run pipeline development at a company that had such a solution using a similar product (that looms large and shall go un-named). But c'mon, I'm a one-man shop here. And further, since ftrack_connect is largely an open-source code base, you'd want consultants and freelance TDs to be able to maintain similar small setups against your SaaS service and still develop responsibly in sandboxed network service spaces.

     I think what is needed is a simple "clone server to dev instance" and "destroy dev instance" option for your SaaS customers. I haven't looked too deeply into how your local installs work, but I've seen enough to know this should be possible. As far as exploitation goes, the worst scenario is that someone spins up their dev instance and uses it as a second prod instance, doubling their license count, but that'll bite them eventually. Limiting it to a single "dev" clone at "dev.<mycompany>.ftrackapp.com" should be restrictive enough and would promote healthy development practices for clients of all sizes. You could even charge for additional simultaneous dev spin-ups for companies with large development teams.

     That's my 2 cents anyway. Thanks for taking a look. Any better solutions or ideas are welcome. Also, if I'm seriously misunderstanding how ftrack and ftrack_connect work or are meant to work here, please advise (kindly). It's always possible I'm way off base.
  2. Just a note: I spent a good amount of time today chasing my tail trying to figure out how ftrack-application-launcher actually works, and in the end I found that a rather critical piece of information is missing from the docs.

     TLDR: FTRACK_APPLICATION_LAUNCHER_CONFIG_PATHS needs to be documented so we know how to provide configuration files without having to modify the plugin itself.

     Explanation: ftrack / ftrack-application-launcher / resource / hook / application_launcher.py — Bitbucket, lines 27-31, tell all. Now consider the docs: Developing — ftrack-application-launcher 1.0.5 documentation. They explain the configuration syntax, but say nothing about where the files should go. Eventually you might discover the "config" directory within the plugin itself. However, that directory probably shouldn't be modified, because it lives inside a versioned plugin directory; presumably a new version of the plugin will ship its own config directory, and your changes would be wiped out. Checking the source on Bitbucket confirmed for me that, by default, the hook simply hardcodes that path (see line 25), but also that there's an env var that can be used to specify extra locations to search: FTRACK_APPLICATION_LAUNCHER_CONFIG_PATHS. (A rough sketch of how that search appears to work is included after the post list below.)
  3. Is there a way to specify a robot or API-user username when creating an ftrack_api.Session object? Please note, I'm aware of API keys; I'm referring specifically to the username that is required at login along with an API key.

     Currently I have a single account, and that account is me. I'm developing a server daemon that's going to subscribe to some actions and do some things, both out in the world and within ftrack. I am able to get it to log in using an API key, but only with my username (a sketch of how a session could be pointed at a dedicated account is included after this post list). That may make sense for tools a user initiates: things I do through tools should presumably still be logged as being done by me. However, if I do that for a central server daemon acting on behalf of the 'studio', the daemon ends up masquerading as me, which is problematic. It would be much better to have it run as its own user, such as 'robot' or 'api_user' or the like, but I don't see any kind of support for that in the docs.

     I tried creating a user, "mr robot", and leaving it disabled. The API returns an error because the user is not enabled, which makes sense: you wouldn't want people to be able to log in and do things with arbitrary numbers of unpaid (unlicensed) disabled users; that could be abused. Is there a standard daemon or robot user I could use to make it clear that the daemon or API is making changes in a global manner, rather than as a specific user account? I'd think that would avoid both problems: it wouldn't be pretending to be me, and it wouldn't open the door to arbitrary numbers of freeloading zombies. Is there somewhere in the documentation that I've missed where this specific issue is addressed? Or an alternative approach? Thanks. -brad
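
Addendum to post 1 (Issue #1): a minimal Python sketch of how the base plugin directory appears to be resolved. This is not ftrack's actual code; the exact appdirs arguments ('ftrack-connect-plugins', 'ftrack') and the parsing of the documented FTRACK_CONNECT_PLUGIN_PATH variable are assumptions based on the behaviour described in the post.

    # Sketch only, under the assumptions above; ftrack_connect's real hook
    # may differ in the details.
    import os
    import appdirs

    def candidate_plugin_directories():
        # The default location is derived from the user profile, which is why
        # it lands under %LOCALAPPDATA%\ftrack\... on Windows and is shared by
        # every install run as the same user.
        default_directory = appdirs.user_data_dir(
            'ftrack-connect-plugins', 'ftrack'
        )

        # The environment variable only *adds* search locations; the default
        # directory above is always kept.
        extra = os.environ.get('FTRACK_CONNECT_PLUGIN_PATH', '')
        directories = [default_directory]
        directories.extend(path for path in extra.split(os.pathsep) if path)
        return directories

An "override" variable would only need to skip the appdirs default when set, which is effectively what the post is asking for.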
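Addendum to post 2: a minimal sketch, again an assumption rather than the plugin's actual implementation, of how the application_launcher.py hook seems to build its list of config directories: the version-pinned 'config' directory shipped inside the plugin plus anything listed in FTRACK_APPLICATION_LAUNCHER_CONFIG_PATHS. The os.pathsep-separated parsing is assumed; check the hook source on Bitbucket for the exact behaviour.

    # Sketch only; mirrors the behaviour described in the post.
    import os

    def config_search_directories(plugin_root):
        # Hard-coded default inside the (versioned) plugin directory. Edits
        # here are lost when a new plugin version ships its own config dir.
        directories = [os.path.join(plugin_root, 'config')]

        # Extra, user-controlled locations that survive plugin upgrades.
        extra = os.environ.get('FTRACK_APPLICATION_LAUNCHER_CONFIG_PATHS', '')
        directories.extend(path for path in extra.split(os.pathsep) if path)
        return directories

In practice, setting FTRACK_APPLICATION_LAUNCHER_CONFIG_PATHS to a pipeline-managed directory before starting ftrack_connect keeps launcher configs out of the plugin itself.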
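Addendum to post 3: ftrack_api.Session does accept an api_user argument (alongside server_url and api_key, or the FTRACK_SERVER / FTRACK_API_USER / FTRACK_API_KEY environment variables), so a daemon can at least be pointed at a dedicated account rather than a personal one. The 'pipeline-bot' username, server URL, and key below are placeholders, and, as noted in the post, the account apparently has to be an enabled (licensed) user.

    import ftrack_api

    # Placeholders: substitute your server URL, the dedicated account's
    # username, and that account's API key.
    session = ftrack_api.Session(
        server_url='https://mycompany.ftrackapp.com',
        api_user='pipeline-bot',
        api_key='xxxx-xxxx-xxxx',
    )

    # Changes made through this session are attributed to 'pipeline-bot'
    # rather than to a personal account.
    print(session.query('User where username is "pipeline-bot"').first())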