File: ick_files.mdwn (new file)
Author: Lars Wirzenius <liw@liw.fi>, 2018-07-29 13:47:54 +0300
Commit: a1fa5f264234b3c6cb816fccf1d1b5066b1d85ad
Change: split icktool page into smaller pages

[[!meta title=".ick files"]]

Ick files
-----------------------------------------------------------------------------

An `.ick` file is the input to the `icktool make-it-so` command. It
uses YAML syntax, and has two top-level entries: `projects` and
`pipelines`. Each of those is a list: one of projects to build, and
one of pipelines.

A project has a name, optionally defines some parameters, and lists
one or more pipelines. A pipeline is a reusable sequence of build
actions to achieve a goal, such as checking out code from version
control, or building binaries from source code.

You can roughly think of a pipeline as a subroutine that the project
can call. Each pipeline gets all the project parameters, and can do
things based on the parameter values.
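
The paragraphs below walk through an example. Such an `.ick` file
might look like this minimal sketch (the exact schema &ndash; field
names such as `project:`, `pipeline:`, and `actions:` &ndash; is an
assumption, not taken from ick documentation):

    projects:
      - project: say_hello
        parameters:
          target: world
        pipelines:
          - greet

    pipelines:
      - pipeline: greet
        parameters:
          - target
        actions:
          - shell: |
              target="$(params | jq -r .target)"
              echo "hello, $target"
            where: host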

Consider an example with one project, called `say_hello`, which
defines one parameter, `target`, with the value `world`. It uses one
pipeline, `greet`.

There is one pipeline, `greet`, which accepts the parameter `target`,
and consists of one action: running a snippet of shell code on the
build host. The snippet extracts the value of the parameter. It uses
a pre-defined shell function, `params`, which outputs the values of
all parameters, and extracts the value of the `target` parameter with
the `jq` tool.

Actions are implemented and executed by the worker manager, which is
an agent running on each build host. The worker manager queries the
controller for an action to execute, executes it, and reports any
output and exit code back to the controller. The controller keeps
track of all projects, pipelines, builds, build output, and build
results.

Note that `icktool make-it-so` does not touch project or pipeline
resources that are not mentioned in the .ick file. If the controller
knows of a project `foo`, but `foo` is not in the .ick file, the
project is neither modified nor deleted: it remains exactly as it was.
Also, projects may reference any pipelines the controller knows about,
even ones not mentioned in the .ick file.


Pipelines: "where"
-----------------------------------------------------------------------------

Each action in a pipeline MUST specify where it is run. There are
three possibilities:

* `host` &ndash; directly on the host
* `chroot` &ndash; in a chroot of the workspace
* `container` &ndash; in a container, using a defined system tree
  (systree), with the workspace bind-mounted at /workspace

You should strive to run all actions in containers, since that makes
it hardest to mess up the worker host.

In a container or a chroot, actions run as root. On the host, they
can use `sudo` to gain root access.
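
As a sketch, the `where` field sits alongside the action itself in
the pipeline's action list:

    actions:
      - shell: |
          cat /etc/os-release
        where: container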


Pipelines: actions
-----------------------------------------------------------------------------

The worker manager defines (at least) the following actions:

* `shell: snippet`

  Run `snippet` using the shell. The cwd is the workspace. The shell
  function `params` outputs all parameters as JSON. The `jq` tool
  can be used to extract the value of a specific parameter.
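
  Example (a sketch; the `target` parameter and the `where` value
  are hypothetical):

      - shell: |
          target="$(params | jq -r .target)"
          echo "hello, $target"
        where: container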

* `python: snippet`

  Run `snippet` using Python 3. The cwd is the workspace. The global
  variable `params` is a dict with all parameters.
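
  Example (a sketch; the `target` parameter and the `where` value
  are hypothetical):

      - python: |
          print("hello, {}".format(params["target"]))
        where: container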

* `debootstrap: auto`

  Run the debootstrap command in the workspace. If the value given
  to the `debootstrap` key is `auto`, the Debian dist installed
  into the workspace is taken from the parameter `debian_codename`.
  The dist can also be named explicitly as the value.

  The mirror defaults to `http://deb.debian.org/debian`, but can be
  given explicitly, if needed.
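
  Example (a sketch; the `mirror` field name and the `where` value
  are assumptions based on the description above):

      - debootstrap: stretch
        mirror: http://deb.debian.org/debian
        where: host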

* `archive: workspace`

  Take the current content of the workspace, put it in a compressed
  tar archive, and upload it to the artifact store. Get the name of
  the artifact from the parameter named in the `name_from` field, if
  given, and from the `artifact_name` parameter if not.

  Optionally, archive only parts of the workspace: the user can give
  a list of globs in the `globs` field, and anything that matches
  any of the shell filename patterns will be included. The default
  is to archive everything.

  Example:

      - archive: workspace
        name_from: debian_packages
        globs:
          - "*_*"

  This would archive everything at the root of the workspace with an
  underscore in the name. The artifact is named using the value of
  the `debian_packages` project parameter.

* `archive: systree`

  This is identical to `archive: workspace`, except it archives the
  system tree instead of the workspace.

* `action: populate_systree`

  Get an artifact from the artifact store, and unpack it as the
  systree. The artifact is assumed to be a compressed tar archive.
  The name of the artifact comes from the `systree_name` parameter,
  or from the parameter named in the `name_from` field.

  If the artifact does not exist, the build ends in failure.
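
  Example (a sketch; the parameter name and the `where` value are
  hypothetical):

      - action: populate_systree
        name_from: debian_systree
        where: host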

* `action: populate_workspace`

  Identical to `action: populate_systree`, except it unpacks to the
  workspace. The default parameter name is `workspace_name`. If the
  artifact does not exist, do nothing.

* `action: git`

  Clone the git repository given in `git_url` into the directory
  named in `git_dir`, checking out the branch/tag/commit named in
  `git_ref`. If the directory already exists, do a `git remote
  update` inside it instead.

  This is intended to do all the networking parts of getting source
  code from a git server. It should be run with `where: host`, so it
  has network access and can use the worker's ssh key (for
  non-public repositories).

  Further actions, inside a container, can do other operations.
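
  Example (a sketch; it is an assumption that `git_url`, `git_dir`,
  and `git_ref` are supplied as project parameters rather than as
  fields on the action itself):

      - action: git
        where: host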

* `action: git_mirror`

  This will be replacing the `git` action. It has not yet been
  implemented. You can use the `git` action as before, for now.
  Eventually the `git` action will be removed, after a suitable
  transition period.

  Mirror one or more git repositories specified in the `sources`
  project parameter. The parameter is expected to have as its value
  a list of dicts, each dict containing the following fields:

  * `name` &mdash; name of the repository
  * `repo` &mdash; URL of the repository
  * `location` &mdash; ignored by `git_mirror`, but can be used to
    specify where the cloned repository is to be checked out
  * `ref` &mdash; ignored by `git_mirror`, but can be used to
    specify which ref (branch, tag, or commit) should be checked out

  Additionally, `git_mirror` will use the `git_repo_base` project
  parameter: if the `repo` field is a relative URL, it will be
  joined with the `git_repo_base` value to form the full URL. If
  `repo` is a full URL, it is used as is.

  Note that `git_mirror` does NOT do the checking out, only the
  initial mirroring (as if by `git clone --mirror $repo
  .mirror/$name`) or updating of the mirror (as if by `cd
  .mirror/$name && git remote update --prune`).

  Ick provides a pipeline that uses the `git_mirror` action and
  additional `shell` or `python` actions to do the checkout.
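
  Example of the `sources` parameter (a sketch; the names and URLs
  are hypothetical):

      parameters:
        git_repo_base: https://git.example.com
        sources:
          - name: hello
            repo: hello.git
            location: src/hello
            ref: master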

* `action: rsync`

  Copy the content of a directory in the workspace (named in
  `rsync_src`) to a remote server (named in `rsync_target`). This
  should be run on the host, to have network access and use of the
  worker's ssh key.
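
  Example (a sketch; it is an assumption that `rsync_src` and
  `rsync_target` are supplied as project parameters):

      - action: rsync
        where: host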

* `action: dput`

  Upload all Debian .changes files to the APT repository that is
  part of an ick cluster. No parameters needed.

  This should be run on the host, to have network access and use of
  the worker's ssh key.

* `action: notify`

  Use the notification service to send notifications of the current
  state of the build. You usually don't need to use this action
  manually: the controller adds a notification action automatically
  when a build ends.


Notifications
-----------------------------------------------------------------------------

Ick can send an email when a build ends. This is done automatically.
Set the project parameter `notify` to a list of email addresses that
should be notified.
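
For example (a sketch; the address is hypothetical):

    parameters:
      notify:
        - alice@example.com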


Standard pipelines
-----------------------------------------------------------------------------

Ick comes with a set of "standard" pipelines, named `ick/get_sources`
and the like. They are stored and documented in the ick source code:
<http://git.liw.fi/ick2/tree/pipelines>

The standard pipelines are not installed automatically. The ick
instance admin needs to install and update them manually.


Example .ick files
-----------------------------------------------------------------------------

The <http://git.liw.fi/liw-ci/tree/> repository has some .ick files,
used by ick's author for his personal instances.


General advice
-----------------------------------------------------------------------------

On each ick instance, set up one or more projects that build systrees.
Build one for each target Debian release: `jessie`, `stretch`,
`unstable`, etc. Build each of these projects to populate the artifact
store with systree artifacts. Each actual project should use one of
the artifacts to do the build in a container.

You can use the `ick/build_debian_systree` pipeline to build systrees
with Debian.
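
For example, a systree-building project might look like this (a
sketch; the project name and the `artifact_name` value are
hypothetical, and it is an assumption that the pipeline uses the
`debian_codename` and `artifact_name` parameters):

    projects:
      - project: stretch_systree
        parameters:
          debian_codename: stretch
          artifact_name: stretch_systree
        pipelines:
          - ick/build_debian_systree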

When building in a container, you can install more things inside the
container. If you build repeatedly, it may be worth having more than
just a minimal container: instead, have an artifact with all the
build dependencies installed, so that each build does not have to
start by installing them, saving time.

Ick can build several things at once: as many as there are workers.
However, each build (currently) runs on one worker.

There's currently no way to stop a triggered build. This is a missing
feature.