ASWF CI Working Group

Meeting: 18 August 2021

https://zoom.us/j/757849640?pwd=QzE1K2hrL2FHSFhKK3h5Z3BWTFJsZz09

Attendees

  • Jean-Francois Panisset (VES Technology Committee)
  • Aloys Baillet (Animal Logic)
  • Larry Gritz (Sony Imageworks)
  • Sergio Rojas (Arena World)
  • Tiago Carvalho (Rust WG)
  • Andrew Grimberg (LF RelEng)
  • Sean Looper (AWS)
  • Deke Kincaid (Digital Domain)
  • David Aguilar, Disney Animation

Apologies

  • Daniel Heckenberg (Animal Logic), WG Chair

New items

  • Transition to official WG proposal for next TAC meeting

  • Rust WG CI needs

    • Conversation on #rust Slack channel

    • Tiago: we are still experimenting and investigating how everything will work. It is becoming clear that we have a dependency on cppmm, an llvm/clang based tool that processes headers (for instance OpenEXR’s): it doesn’t need the whole library, only the headers, and it generates the boilerplate code that is then consumed by the Rust side. That generated layer is very unsafe / C-like; a manual step then wraps it in safe Rust code (see the sketch below), while cppmm itself is automated. At a crossroads right now: one approach is to have someone “in charge” run cppmm manually and commit the generated files, but Tiago thinks that’s just a short-term solution. Once the project is migrated to the OpenEXR project, they will want to make sure that their next version is cppmm-ready, so the bindings can be tested automatically.
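
    • A minimal sketch of the two-layer pattern described above (illustrative only; the Imf_Header_setName symbol and types are hypothetical, not actual cppmm output):

      // Layer 1: unsafe, C-like declaration of the kind cppmm would
      // generate from the C++ headers (hypothetical symbol name).
      use std::ffi::{c_void, CString};
      use std::os::raw::{c_char, c_int};

      extern "C" {
          fn Imf_Header_setName(header: *mut c_void, name: *const c_char) -> c_int;
      }

      // Layer 2: the hand-written safe Rust wrapper around the generated binding.
      pub struct Header {
          ptr: *mut c_void,
      }

      impl Header {
          pub fn set_name(&mut self, name: &str) -> Result<(), i32> {
              let c_name = CString::new(name).map_err(|_| -1)?;
              // SAFETY: ptr is assumed to point at a live C++ Header object.
              let rc = unsafe { Imf_Header_setName(self.ptr, c_name.as_ptr()) };
              if rc == 0 { Ok(()) } else { Err(rc) }
          }
      }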

    • JF: desirable not to check in files that can be regenerated reasonably quickly. Tiago: agreed.

    • Tiago: tried to make use of an existing Docker container from the aswf-docker project, but because of the dependency on llvm 11 (or maybe 10?), ended up just building on Ubuntu (roughly as sketched below). That’s the biggest dependency; the cppmm image probably doesn’t need the OpenEXR dependency itself, so it might be easier to build their own cppmm image to generate the bindings. Is there something the CI group can offer to generate images?
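
    • A sketch of the Ubuntu setup that building an llvm/clang 11 based tool like cppmm would likely need (package names are assumptions, not taken from the meeting):

      # Toolchain plus the llvm/clang 11 development packages (Ubuntu 20.04 names).
      sudo apt-get update
      sudo apt-get install -y build-essential cmake llvm-11-dev libclang-11-dev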

    • Andrew: we have a repository on Docker Hub that could be used.

    • JF: repos under github.com/AcademySoftwareFoundation have access to tokens. Andrew: and to a paid-for Docker Hub account, so no throttling.

    • Tiago: don’t we risk bloating the foundation’s Docker Hub? Eventually this will be more formalized. JF: there are lots of benefits to using the paid-for ASWF Docker Hub.

    • JF: could cppmm be part of the aswf-common container? Aloys: the 2022 containers have llvm 11, but agree with Tiago that for now it may be better to iterate quickly outside of the ASWF containers. Can use the aswf-testing organization; will provide the name of an image that has llvm 11. JF: could just use FROM aswf-common and add a single layer on top for experimenting (see the Dockerfile sketch below).
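
    • A minimal Dockerfile sketch of that layering idea (the image tag, repository URL and build steps are assumptions, not something agreed in the meeting):

      # Start from an existing ASWF CI image and add one experimental layer.
      FROM aswf/ci-common:2022
      # Build and install cppmm on top (URL and steps illustrative only).
      RUN git clone https://github.com/vfx-rs/cppmm.git /tmp/cppmm \
       && cmake -S /tmp/cppmm -B /tmp/cppmm/build \
       && cmake --build /tmp/cppmm/build --target install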

    • Tiago: what’s the experience building with those images? Are they used on GitHub Actions or on-prem? Aloys: used on GitHub Actions. Tiago: what’s the performance like? In his experience building on Azure Pipelines, a lot of time was spent fetching the images. Andrew: that will happen if the images aren’t used on a regular basis. GitHub Actions runs on top of Azure Pipelines, and an image that isn’t used regularly will “fall out of cache”; an image used on a semi-regular basis will be reasonably quick to fetch, so using our images you should see much less of a speed bump when instantiating. JF: the aswf-common image isn’t unreasonably large. Aloys: not the leanest images, but waiting time should still be reasonable. JF: most of the ASWF projects are already using the images for their automated builds (see the minimal GitHub Actions sketch below).
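
    • A minimal sketch of running a build job inside one of the ASWF images on GitHub Actions (image tag and steps illustrative):

      jobs:
        build:
          runs-on: ubuntu-latest
          container: aswf/ci-common:2021
          steps:
            - uses: actions/checkout@v2
            # The toolchain comes from the container image, not the runner VM.
            - run: cmake --version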

    • Tiago: to create these images, with the cppmm image derived from the common one, do you have to be part of the organization? Andrew: it’s all PR-based; you don’t have to be a member of the organization, just a committer for the repository. Tiago: will submit a testing PR.

    • Can open a helpdesk ticket with LF RelEng to move a repo under the ASWF GitHub organization: https://support.linuxfoundation.org has a “repository” ticket type.

    • Although a WG within the ASWF, the project also started outside the ASWF, under the vfx-rs organization, so it would need to migrate.

  • Follow up on “VFX Platform adjacent” libraries: https://docs.google.com/spreadsheets/d/1yg2hls5pLRf4mimazfmI3ctQ-HsB2dRiiucIFNmaQzg/edit?usp=sharing

    • Aloys: added libz, probably a lot more. Script: install_yumpackages.yml

    • /lib vs /usr/lib unification

    • Are we OK with newer versions of CMake? Larry: would definitely be useful. The ASWF containers build their own CMake; each year of containers has a “modern at the time” version. Larry: wants to know what someone frozen on a specific year of the Reference Platform would be building with.

    • JF: the list can be documentation, and a subset can be “frozen” in containers. Aloys: can run a script to figure out what’s really inside the images (see the sketch below). Over time, if we need more fine-grained control, we can create packages for those, but that’s all extra work, so as long as we’re happy with the OS-provided versions we may not want to do it. Eventually we want to move to Windows, where none of these come from the system, so we will need to pick a version for each (realized this from the Conan work; a lot of them already exist as Conan recipes), so we could pick versions from the Conan recipes. JF: but of course we don’t want to end up creating our own Linux distribution.
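
    • A sketch of the kind of inspection mentioned above (image tag assumed; the Linux images are CentOS-based, hence rpm for the package list):

      # Which CMake does a given year's image freeze?
      docker run --rm aswf/ci-common:2021 cmake --version
      # What OS packages are really inside?
      docker run --rm aswf/ci-common:2021 rpm -qa | sort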

  • aswf-docker updates

    • 2022 images (OpenEXR 3.1?)

      • Not yet, but should be quick as well
    • Conan

      • Conan Center has around a thousand recipes, covering most of these packages, so we don’t have to maintain those recipes ourselves: we can mesh with the existing packages provided by Conan, which even provides pre-built binaries. If we go with a package-management approach, we don’t have to provide a “whole OS” but can leverage Conan, where a lot of the work is already done; this also applies to Windows and macOS. On Linux, though, we prefer to clone and massage the Conan recipes to be more compatible with what we already have: a lot of libraries default to static libraries, whereas we prefer a DSO-based approach, so many of the Conan Center defaults don’t match (see the sketch below).
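
      • A sketch of overriding that default at install time (Conan 1.x syntax; the package reference is illustrative):

        # Ask Conan for a shared-library (DSO) build instead of the static default.
        conan install openexr/3.1.1@ --build=missing -o openexr:shared=True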

      • Aloys: have already converted a lot of “tool” packages in aswf-docker, based on recipes from Conan Center. Reference Platform 2019-2022 is building; Qt is left to do. Then will start working on the VFX packages, so we will have a set of pre-built binary packages ready for download.

      • Currently using a personal Artifactory instance; there may be an issue with the ASWF Artifactory, and there may not be enough compute power to build Qt without running out of time. Andrew: should be resolved some time in the next couple of months; were on a call with the GitHub Actions team, and “premium runners” will be available soon: you can select any size of Azure instance to run on as long as you pay for it, which the ASWF will be able to do. These will include GPU instances, and will also be able to run in our own (LF) cloud, so we can leverage AWS or an OpenStack cloud. Coming soon!

      • Aloys: will reach out; can I be added temporarily to the ASWF Artifactory? At the moment Qt builds time out.

      • Aloys: only have a couple of hours per week to work on this, but it looks promising, and should “just work” on Windows, which is a nice change from the pure Docker-container-based approach. No testing on Windows yet, but hopeful.

    • Larry: request for OSL images. Aloys: will be adding OpenVDB and TBB to OSL images.

    • Aloys: will also be adding MaterialX, so may push those to the main branch for 2022, but might wait for the official OpenVDB 9 release, though there is some uncertainty as to whether it will make it. Currently building OpenVDB 8 with a patch to pick up OpenEXR 3.0. Larry: guessing that OpenEXR 3.1 support will be a minor change or none at all, with no compatibility changes required. Aloys: yes, not anticipating too much trouble. May put the Conan work on hold to add TBB / OpenVDB to the OSL images.

    • JFrog for containers?

      • Do we want to look at this, or continue using Docker Hub (on the paid program)? Andrew: if you want people to pull your containers easily, Docker Hub is the place to be; if you are in a registry like GitHub’s or JFrog’s, people have to jump through hoops to use your containers. So we should stick with Docker Hub unless it becomes significantly more painful or expensive to use (we are currently paying $300/year). JFrog hasn’t shared any space restrictions on their public cloud storage. Aloys: once we start uploading various clang builds and other toolchains, that may take a fair amount of space. Andrew: for Nexus systems, release repos should hold only the release artifacts, and test repos should have policies to purge older artifacts from storage; for instance, the Nexus java-staging repos only retain artifacts for one month.

  • MacStadium macOS system credentials vnc://207.254.41.215

  • PGP infrastructure for ASWF projects for signing releases and artifacts?

    • Example: jq downloads https://stedolan.github.io/jq/download/

    • LF RelEng is putting the finishing touches on a Sigul service (a PGP and certificate signing system for artifacts, designed for Fedora / CentOS infrastructure), which will be accessible from GitHub Actions. Have been running that infrastructure for Jenkins systems for 4-5 years, now making it available more publicly. Will still need proper credentials to get things signed. Will also put together some GitHub Actions to sign and release. Don’t do embedded signatures, only detached signatures. Also do git tagging.

    • Aloys: would this work for uploads to Artifactory? Andrew: Sigul has a Python client (not in PyPI!) for the agent itself; the agent needs two pieces of information: a password and a certificate it uses to communicate with a bridge, which the signing server also communicates with. So when you want to sign a git tag or artifact, you pass this information over an encrypted channel to the bridge, and the server passes the signature back via the bridge. Embedded signatures are only possible with RPMs; everything else gets a detached signature. For a git tag, the signature can be pushed into the repo; signature files (.asc files, ASCII armor) are simply stored beside the artifact (see the GPG sketch below). Currently done with Nexus; should work the same on Artifactory.
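
    • For reference, the standard GPG equivalents of those outputs (plain gpg/git usage, not Sigul-specific; file and tag names illustrative):

      # Detached, ASCII-armored signature stored beside the artifact.
      gpg --armor --detach-sign artifact.tar.gz     # writes artifact.tar.gz.asc
      gpg --verify artifact.tar.gz.asc artifact.tar.gz
      # Signed git tag pushed into the repo.
      git tag -s v1.0.0 -m "signed release tag"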

Follow Ups