Contributing packages
The contribution process can be broken down into three steps:

- Step 1: The staging process (add recipe and license). With the help of the staging process, add a package's recipe and license to the staged-recipes repository and create a PR.
- Step 2: The post staging process. Once your PR has been merged, take a look at our post staging process to see what follows.
- Step 3: Maintaining the package. Contributing a package to conda-forge makes you the maintainer of that package. Learn more about the roles of a maintainer.

The sections below add more details about each step.
The staging process
The staging process, i.e. adding a package's recipe, has three steps:
- Generating the recipe
- Checklist
- Feedback and revision
Generating the recipe
- If it is an R package from CRAN, a Python package from PyPI, a Perl package from CPAN, or a Lua package from Luarocks, generate a recipe with rattler-build generate-recipe. Then, if necessary, you can make manual edits to the recipe.

  Note: rattler-build can be installed and used via pixi: pixi exec rattler-build generate-recipe INDEX_NAME PACKAGE_NAME. Replace INDEX_NAME with the name of the index (e.g. cran) and PACKAGE_NAME with the name of the package.

  You do not have to use rattler-build, and the recipes produced by rattler-build might need to be reviewed and edited. Read more in the rattler-build recipe generation documentation.

- If it's none of the above, generate a recipe with the help of the example recipe in the staged-recipes repository and the rattler-build documentation, including the full recipe specification.
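For example, to let rattler-build draft a recipe for a package published on PyPI (the package name requests is used here purely for illustration), you could run:

pixi exec rattler-build generate-recipe pypi requests

Review the generated recipe.yaml afterwards; dependencies, tests, and license information often need manual adjustment.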
Your final recipe should have no comments (unless they're actually relevant to the recipe, not just generic instruction comments), and follow the order in the example.
If there are any details you are not sure about, please create a pull request anyway. The conda-forge team will review it and help you make the necessary changes.
If you are building your first recipe for conda-forge, the step-by-step instructions and checklist below will help you toward a successful build.
Step-by-step Instructions
- Ensure your source code can be downloaded as a single file. Source code should be downloadable as an archive (.tar.gz, .zip, .tar.bz2, .tar.xz) or tagged on GitHub, to ensure that it can be verified. (For further detail, see Build from tarballs, not repos.)
- Fork and clone the staged-recipes repository from GitHub.
- Check out a new branch from the staged-recipes main branch.
- Through the CLI, cd into the staged-recipes/recipes directory.
- Within your forked copy, create a new folder in the recipes folder for your package (i.e. ...staged-recipes/recipes/<name-of-package>).
- Copy recipe.yaml from the example directory. All the changes in the following steps will happen in the COPIED recipe.yaml (i.e. ...staged-recipes/recipes/<name-of-package>/recipe.yaml). Please leave the example directory unchanged!
- Modify the copied recipe (recipe.yaml) as needed. To see how to modify recipe.yaml, take a look at the rattler-build documentation.
- Generate the SHA256 hash for your source code archive, as described in the example recipe, using the openssl tool. Alternatively, for packages from PyPI, you can go to the package description on PyPI and copy the SHA256 directly from there.
- Be sure to fill in the test section; a minimal sketch is shown below. The simplest test will simply check that the module can be imported, as described in the example.
- Remove all irrelevant comments in the recipe.yaml file.
Be sure not to checksum the redirection page. Use, for example:
curl -sL https://github.com/username/reponame/archive/vX.X.X.tar.gz | openssl sha256
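As promised in the test step above, here is a minimal sketch of an import test in the rattler-build recipe format; example_package is a placeholder for your package's importable module:

tests:
  - python:
      imports:
        - example_package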
Checklist
- Ensure that the license and license family descriptors (optional) have the right case and that the license is correct. Note that the license field is case sensitive (e.g. Apache-2.0 rather than APACHE 2.0). Using SPDX identifiers for the license field is recommended (see SPDX Identifiers and Expressions).
- Ensure that you have included a license file if your license requires one – most do. (see this section of the example recipe)
- In case your project has tests included, you need to decide if these tests should be executed while building the conda-forge feedstock.
- Make sure that all tests pass successfully at least on your development machine.
- Recommended: run the tests locally on your source code to ensure the recipe works locally (see Running tests locally for staged recipes).
- Make sure that your changes do not interfere with other recipes that are in the recipes folder (e.g. the example recipe).
Feedback and revision
Once you have finished your PR, all you have to do is wait for feedback from our review team.
The review team will assist you by pointing out improvements and answering questions. Once the package is ready, the reviewers will approve and merge your pull request.
After the PR is merged, our CI infrastructure will build the package and make it available on the conda-forge channel.
If you have questions or have not heard back for a while, you can notify us by including @conda-forge/staged-recipes
in your GitHub message.
Post staging process
- After the PR is merged, our CI services will create a new git repository automatically. For example, the recipe for a package named pydstool will be moved to a new repository, https://github.com/conda-forge/pydstool-feedstock. This process is automated through a CI job on the conda-forge/staged-recipes repo. It sometimes fails due to API rate limits and will automatically retry itself. If your feedstock has not been created after a day or so, please get in touch with the conda-forge/core team for help.
- CI services will be enabled and a build will be triggered automatically; the build creates the conda package and uploads it to https://anaconda.org/conda-forge.
- If this is your first contribution, you will be added to the conda-forge team and given access to the CI services so that you can stop and restart builds. You will also be given commit rights to the new git repository.
- If you want to make a change to the recipe, send a PR to the git repository from a fork. Branches of the main repository are used only for maintaining different versions.
Feedstock repository structure
Once the PR containing the recipe for a package is merged in the staged-recipes repository, a new repository called <package-name>-feedstock is created automatically.
A feedstock is made up of a conda recipe (the instructions on what and how to build the package) and the necessary configuration files for automatic builds using freely available continuous integration (CI) services.
Each feedstock contains various files that are generated automatically using our automated provisioning tool, conda-smithy. Broadly, every feedstock has the following files:
recipe
This folder contains the recipe.yaml file and any other files/scripts needed to build the package.

LICENSE.txt
This file is the license for the recipe itself. This license is different from the package license, which you define while submitting the package recipe using license_file in the recipe.yaml file.

CI files
These are the CI configuration files for service providers like Azure and Travis CI.

conda-forge.yml
This file is used to configure how the feedstock is set up and built. Making any changes in this file usually requires Rerendering feedstocks.
Maintainer role
The maintainer's job is to:
- Keep the feedstock updated by merging maintenance PRs from conda-forge's bots.
- Keep the feedstock on par with new releases of the source package by:
  - bumping the version number and checksum,
  - making sure that the feedstock's requirements stay accurate, and
  - making sure the test requirements match those of the updated package.
- Answer questions about the package on the feedstock issue tracker.
Adding multiple packages at once
If you would like to add multiple related packages, they can be added to staged-recipes in a single pull request (in separate directories). If the packages are interdependent (i.e. one package being added lists one or more of the other packages being added as a requirement), the build script will be able to locate the dependencies that are only present within staged-recipes, as long as the builds finish in dependency order. Using a single pull request allows you to quickly get packages set up without waiting for each package in a dependency chain to be reviewed, built, and added to the conda-forge channel before starting the process over with the next recipe in the chain.
When PRs with multiple interdependent recipes are merged, there may be an error if a build finishes before its dependency is built. If this occurs, you can trigger a new build by amending the last commit and force-pushing:
git commit --amend --no-edit && git push --force
Synchronizing fork for future use
If you would like to add additional packages in the future, you will need to reset your fork of staged-recipes before creating a new branch on your fork, adding the new package directory/recipe, and creating a pull request. This step ensures you have the most recent version of the tools and configuration files contained in the staged-recipes repository and makes the pull request much easier to review. The following steps will reset your fork of staged-recipes and should be executed from within a clone of your forked staged-recipes directory.
- Check out your main branch:
  git checkout main
- Define the conda-forge/staged-recipes repository as upstream (if you have not already done so):
  git remote add upstream https://github.com/conda-forge/staged-recipes.git
- Pull all of the commits from the upstream main branch:
  git pull --rebase upstream main
- Push the changes to your fork on GitHub (make sure there are no changes on GitHub that you need, because they will be overwritten):
  git push origin main --force
Once these steps are complete, you can continue with the steps in Step-by-step Instructions to stage your new package recipe using your existing staged-recipes fork.
The recipe.yaml
The recipe.yaml file in the recipe directory is at the heart of every conda package. It defines everything that is required to build and use the package.
A full reference of the structure and fields of the recipe.yaml file can be found in the rattler-build recipe file documentation.
In the following, we highlight particularly important and conda-forge specific information and guidelines, ordered by section in recipe.yaml.
Source
Build from tarballs, not repos
Packages should be built from tarballs using the url key, not from repositories directly (e.g. by using git).
There are several reasons behind this rule:
- Repositories are usually larger than tarballs, draining shared CI time and bandwidth.
- Repositories are not checksummed. Thus, using a tarball has a stronger guarantee that the download that is obtained to build from is in fact the intended package.
- On some systems, it is possible to not have permission to remove a repo once it is created.
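For illustration, a url-based source section looks like the following sketch (the repository, version, and checksum are placeholders):

source:
  url: https://github.com/example-org/example/archive/v1.2.3.tar.gz
  sha256: <sha256 of the tarball; see the next section>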
Populating the hash field
If your package is on PyPI, you can get the sha256 hash from your package's page on PyPI; look for the SHA256 link next to the download link on your package's files page, e.g. https://pypi.org/project/<your-project>/#files.
You can also generate a hash from the command line on Linux (and Mac if you install the necessary tools below).
To generate the sha256 hash:
pixi exec openssl sha256 your_sdist.tar.gz
Be sure not to checksum the redirection page. Use, for example:
pixi exec --with openssl curl -sL https://github.com/username/reponame/archive/vX.X.X.tar.gz | openssl sha256
Downloading extra sources and data files
rattler-build supports multiple sources per recipe: see the relevant documentation section.
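As a sketch, a recipe that downloads an extra data archive into a subdirectory might look like this (the URLs are placeholders; consult the linked documentation for the full set of supported keys, such as target_directory):

source:
  - url: https://example.com/example-1.2.3.tar.gz
    sha256: <sha256 of the source tarball>
  - url: https://example.com/example-data.tar.gz
    sha256: <sha256 of the data archive>
    target_directory: data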
Build
Skipping builds
Use the skip key in the build section. You can, for example, specify not to build on specific platforms:
build:
  skip:
    - win
...
Optional: bld.bat and/or build.sh
In many cases, bld.bat and/or build.sh files are not required. Pure Python packages almost never need them.
If the build can be executed with one line, you may put this line in the script entry of the build section of the recipe.yaml file:
script: "${{ PYTHON }} -m pip install . -vv"
Remember to always add pip to the host requirements.
Use pip
Normally Python packages should use this line:
build:
  script: "${{ PYTHON }} -m pip install . -vv"
as the installation script in the recipe.yaml file or in the bld.bat/build.sh script files, while adding pip to the host requirements:
requirements:
  host:
    - pip
Usually pure-Python packages only require python, setuptools and pip as host requirements; the real package dependencies are only run requirements.
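Putting these pieces together, a minimal pure-Python recipe.yaml might look like the following sketch (the package name, version, URL, license, and checksum are all placeholders):

package:
  name: example-package
  version: "1.2.3"

source:
  url: https://pypi.org/packages/source/e/example-package/example_package-1.2.3.tar.gz
  sha256: <sha256 of the sdist>

build:
  number: 0
  noarch: python
  script: "${{ PYTHON }} -m pip install . -vv"

requirements:
  host:
    - python
    - pip
    - setuptools
  run:
    - python

tests:
  - python:
      imports:
        - example_package

about:
  license: MIT
  license_file: LICENSE
  summary: <one-line description>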
Requirements
Build, host and run
rattler-build distinguishes three different kinds of dependencies. In the following paragraphs, we give a very short overview of which packages go where. For a detailed explanation please refer to the rattler-build documentation.
Build
Build dependencies are required in the build environment; they are the tools that run at build time and are not needed on the host of the package.
The following packages are examples of typical build dependencies:
- compilers (see Compilers)
- cmake
- make
- pkg-config
- CDT packages (see Core Dependency Tree Packages (CDTs))
Host
Host dependencies are required during the build phase but, in contrast to build dependencies, have to be present on the host, i.e. built for the target platform.
The following packages are typical examples of host dependencies:
- shared libraries (c/c++)
- python/r libraries that link against c libraries (see e.g. Building Against NumPy)
- python, r-base
- setuptools, pip (see Use pip)
Run
Run dependencies are only required at run time of the package. Run dependencies typically include:
- most python/r libraries
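To make the split concrete, here is a sketch of a requirements section for a package that compiles C code and ships a Python module (libexample is a placeholder for whatever shared library the package links against):

requirements:
  build:
    - ${{ compiler('c') }}
    - cmake
    - make
  host:
    - python
    - pip
    - libexample
  run:
    - python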
Avoid external dependencies
As a general rule: all dependencies have to be packaged by conda-forge as well. This is necessary to ensure ABI compatibility for all of our packages.
There are only a few exceptions to this rule:
- Some dependencies have to be satisfied with CDT packages (see Core Dependency Tree Packages (CDTs)).
- Some packages require root access (e.g. device drivers) that cannot be distributed by conda-forge. These dependencies should be avoided whenever possible.
Pinning
Linking against shared C/C++ libraries makes a package dependent on the ABI of the library used at build time. The exposed interface changes when previously exposed symbols are deleted or modified in a newer version.
It is therefore crucial to ensure that only library versions with a compatible ABI are used after linking.
In the best case, the shared library you depend on:
- defines a pin in the list of globally pinned packages, or
- exports its ABI-compatible requirements by defining run_exports in its recipe.
In these cases you do not have to worry about version requirements:
requirements:
  # [...]
  host:
    - readline
    - libpng
In other cases you have to specify ABI compatible versions manually.
requirements:
  # [...]
  host:
    - libawesome ==1.1.*
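For the library side of this mechanism, a feedstock can export an ABI-compatible pin itself. A sketch in rattler-build syntax, with libawesome as a placeholder name:

requirements:
  run_exports:
    - ${{ pin_subpackage('libawesome', upper_bound='x.x') }}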
For more information on pinning, please refer to Pinned dependencies.
Constraining packages at runtime
The run_constraints section allows defining restrictions on packages at runtime without depending on the package. It can be used to restrict the allowed versions of optional dependencies and to define incompatible packages.
Defining non-dependency restrictions
Imagine a package that can be used together with version 1 of awesome-software when present, but does not strictly depend on it. Therefore you would like to let users choose whether to use the package with or without awesome-software. Let's assume further that the package is incompatible with version 2 of awesome-software.
In this case run_constraints can be used to restrict awesome-software to version 1.*, if the user chooses to install it:
requirements:
  # [...]
  run_constraints:
    - awesome-software ==1.*
Here run_constraints acts as a means to protect users from incompatible versions without introducing an unwanted dependency.
Defining conflicts
Sometimes packages interfere with each other and therefore only one of them can be installed at any time.
In combination with an unsatisfiable version, run_constraints can define blockers:
package:
  name: awesome-db

requirements:
  # [...]
  run_constraints:
    - amazing-db ==9999999999
In this example, awesome-db cannot be installed together with amazing-db, as there is no package amazing-db-9999999999.
Test
All recipes need tests. Here are some tips, tricks, and justifications. How you should test depends on the type of package (python, c-lib, command-line tool, …) and on what tests are available for that package. But every conda package must have at least some tests.
Simple existence tests
Sometimes defining tests seems hard, e.g. because:
- tests for the underlying code base may not exist,
- the test suite may take too long to run on limited CI infrastructure, or
- the tests may take too much bandwidth.
In these cases, conda-forge may not be able to execute the prescribed test suite.
However, this is no reason for the recipe to not have tests. At the very least, we want to verify that the package has installed the desired files in the desired locations. This is called existence testing.
Read more in the rattler-build documentation for package contents tests.
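A sketch of such an existence test in the rattler-build recipe format (example-tool and example_package are placeholders; see the linked documentation for all supported keys):

tests:
  - package_contents:
      bin:
        - example-tool
      site_packages:
        - example_package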
Running tests locally for staged recipes
If you want to run and build packages in the staged-recipes repository locally, go to the root repository directory and execute pixi run build-linux or pixi run build-osx, matching your operating system. Then, follow the prompt to select the variant you'd like to build.
This requires that you have Docker installed on your machine if you are building a package for Linux. For macOS, it will prompt you to select a location for the SDK (e.g. export OSX_SDK_DIR=/opt) to be downloaded.
$ cd ~/staged-recipes
$ pixi run build-osx
If you know which variant you want to build, you can specify it as an argument to the script.
$ cd ~/staged-recipes
$ pixi run build-osx <VARIANT>
where <VARIANT> is one of the file names in the .ci_support/ directory, e.g. linux64, osx64, or linux64_cuda<version>.
About
Packaging the license manually
Sometimes upstream maintainers do not include a license file in their tarball even though the license requires one.
If this is the case, you can add the license file to the recipe directory (here named LICENSE.txt) and reference it inside the recipe.yaml:
about:
  license_file: LICENSE.txt
In this case, please also notify the upstream developers that the license file is missing.
The license should only be shipped along with the recipe if there is no license file in the downloaded archive. If there is a license file in the archive, please set license_file to the path of the license file in the archive.
SPDX Identifiers and Expressions
For the about: license entry in the recipe.yaml, using an SPDX identifier or expression is recommended.
See SPDX license identifiers for the licenses, SPDX license exceptions for license exceptions, and SPDX specification Annex D for the specification on expressions. Some examples are:
Apache-2.0
Apache-2.0 WITH LLVM-exception
BSD-3-Clause
BSD-3-Clause OR MIT
GPL-2.0-or-later
LGPL-2.0-only OR GPL-2.0-only
LicenseRef-HDF5
MIT
MIT AND BSD-2-Clause
PSF-2.0
Unlicense
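In recipe.yaml, such an expression goes into the license entry, for example:

about:
  license: Apache-2.0 WITH LLVM-exception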
Licenses of included dependencies
For some languages (Go, Rust, etc.), the current policy is to include all dependencies and their dependencies in the package. This presents a problem when packaging the license files, as each dependency needs to have its license file included in the recipe.
For some languages, the community provides tools which can automate this process, enabling the automatic inclusion of all needed license files.
Rust
cargo-bundle-licenses can be included in the build process of a package and will automatically collect and add the license files of all dependencies of a package.
For a detailed description, please visit the project page; a short example can be found below.
First, include the collection of licenses as a step of the build process.
build:
  number: 0
  script:
    - cargo-bundle-licenses --format yaml --output THIRDPARTY.yml
    - build_command_goes_here
Then, include the tool as a build time dependency.
requirements:
  build:
    - cargo-bundle-licenses
Finally, make sure that the generated file is included in the recipe.
about:
  license_file:
    - THIRDPARTY.yml
    - package_license.txt
Go
go-licenses can be included in the build process of a package and will automatically collect and add the license files of all dependencies of a package.
For a detailed description, please visit the project page; a short example can be found below.
First, include the collection of licenses as a step of the build process.
build:
  number: 0
  script:
    - go build [...]
    - go-licenses save . --save_path="./license-files/"
Then, include the tool as a build time dependency.
requirements:
  build:
    - ${{ compiler('go') }}
    - go-licenses
Finally, make sure that the generated file is included in the recipe.
about:
  license_file:
    - LICENSE
    - license-files/
We are not lawyers and cannot guarantee that the above advice is correct or that the tools are able to find all license files. Additionally, we are unable to accept any responsibility or liability. It is always your responsibility to double-check that all licenses are included and verify that any generated output is correct.
The correct and automated packaging of dependency licenses is an ongoing discussion. Please feel free to add your thoughts.
Extra
Recipe Maintainer
A maintainer is an individual who is responsible for maintaining and updating one or more feedstock repositories and packages, as well as their future versions. Maintainers have push access only to the feedstock repositories of the packages they maintain and can merge pull requests into them.
Contributing a recipe for a package automatically makes you the maintainer of that package.
See Maintainer role and Maintaining packages to learn more about the things that maintainers do.
If you wish to be a maintainer of a certain package, you should contact the current maintainers and open an issue in that package's feedstock with the following command:
@conda-forge-admin, please add user @username
where @username is the GitHub username of the new maintainer to be added. Please refer to Becoming a maintainer and Updating the maintainer list for detailed instructions.
Feedstock name
If you want the name of the feedstock to be different from the package name in staged-recipes, you can use the feedstock-name directive in the recipe of that package, like this:
extra:
  feedstock-name: <name>
Here, <name> is the name you would want for the feedstock. If not specified, the name will be taken from the top-level package name in recipe.yaml.
Miscellaneous
Activate scripts
Recipes are allowed to have activate scripts, which will be sourced or called when the environment is activated. It is generally recommended to avoid activate scripts when another option is possible, because people do not always activate environments the expected way, and these packages may then misbehave.
When using them in a recipe, feel free to name them activate.bat, activate.sh, deactivate.bat, and deactivate.sh in the recipe. The installed scripts should be prefixed by the package name and a separator (the sample code below uses _). Below is some sample code for Unix and Windows that will make this install process easier; please feel free to lift it.
In build.sh:
# Copy the [de]activate scripts to $PREFIX/etc/conda/[de]activate.d.
# This will allow them to be run on environment activation.
for CHANGE in "activate" "deactivate"
do
    mkdir -p "${PREFIX}/etc/conda/${CHANGE}.d"
    cp "${RECIPE_DIR}/${CHANGE}.sh" "${PREFIX}/etc/conda/${CHANGE}.d/${PKG_NAME}_${CHANGE}.sh"
done
In build.bat:
setlocal EnableDelayedExpansion

:: Copy the [de]activate scripts to %PREFIX%\etc\conda\[de]activate.d.
:: This will allow them to be run on environment activation.
for %%F in (activate deactivate) DO (
    if not exist %PREFIX%\etc\conda\%%F.d mkdir %PREFIX%\etc\conda\%%F.d
    copy %RECIPE_DIR%\%%F.bat %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.bat
    REM Copy unix shell activation scripts, needed by Windows Bash users
    copy %RECIPE_DIR%\%%F.sh %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.sh
)
Jinja templating
See the Jinja section of the rattler-build documentation for information on simplifying recipes with templating.