[[!meta title=".ick files"]]

Ick files
-----------------------------------------------------------------------------

An `.ick` file is the input to the `icktool make-it-so` command. It
uses YAML syntax, and has two top-level entries: `projects` and
`pipelines`. Each of those is a list: one of projects to build, and
one of pipelines.

A project has a name, optionally defines some parameters, and lists
one or more pipelines. A pipeline is a reusable sequence of build
actions to achieve a goal, such as checking out code from version
control, or building binaries from source code.

You can roughly think of a pipeline as a subroutine that the project
can call. Each pipeline gets all the project parameters, and can do
things based on the parameter values.
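
Here is a minimal `.ick` file along the lines described below (a
sketch; the exact YAML layout may vary between ick versions):

    projects:
    - project: say_hello
      parameters:
        target: world
      pipelines:
      - greet

    pipelines:
    - pipeline: greet
      parameters:
      - target
      actions:
      - shell: |
          target="$(params | jq -r .target)"
          echo "hello, $target"
        where: host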

In the example above, there is one project, called `say_hello`, which
defines one parameter, `target`, with the value `world`. It uses one
pipeline, `greet`.

There is one pipeline, `greet`, which accepts the parameter `target`,
and consists of one action: running a snippet of shell code on the
build host. The snippet extracts the value of the `target` parameter
using the pre-defined shell function `params`, which outputs the
values of all parameters as JSON, and the `jq` tool.

Actions are implemented and executed by the worker manager, which is
an agent running on each build host. The worker manager queries the
controller for an action to execute, executes it, and reports any
output and exit code back to the controller. The controller keeps
track of all projects, pipelines, builds, build output, and build
results.

Note that `icktool make-it-so` does not touch project or pipeline
resources that are not mentioned in the .ick file. If the controller
knows of a project foo, but foo is not in the .ick file, the foo
project is neither modified nor deleted. It remains exactly as it was.
Also, projects may reference any pipelines the controller knows about,
even if not mentioned in the .ick file.


Pipelines: "where"
-----------------------------------------------------------------------------

Each action in a pipeline MUST specify where it is run. There are
three possibilities:

* `host` – directly on the host
* `chroot` – in a chroot of the workspace
* `container` – in a container using a defined system tree
  (systree) and workspace bind-mounted at /workspace

You should strive to run all actions in containers, since that makes
it hardest to mess up the worker host.

Actions are run as root in a container or a chroot. On the host, they
can use `sudo` to gain root access.
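
For example, an action that installs build tools inside the container
might look like this (a sketch):

    - shell: |
        apt-get update
        apt-get -y install build-essential
      where: container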


Pipelines: actions
-----------------------------------------------------------------------------

The worker manager defines (at least) the following actions:

* `shell: snippet`

    Run `snippet` using the shell. The cwd is the workspace. The shell
    function `params` outputs all parameters as JSON. The `jq` tool
    can be used to extract the value of a specific parameter.
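
    Example (a sketch):

        - shell: |
            target="$(params | jq -r .target)"
            echo "hello, $target"
          where: host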

* `python: snippet`

    Run `snippet` using Python 3. The cwd is the workspace. The global
    variable `params` is a dict with all parameters.
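
    Example (a sketch):

        - python: |
            print('hello, %s' % params['target'])
          where: host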

* `debootstrap: auto`

    Run the debootstrap command in the workspace. If the value given
    to the `debootstrap` key is `auto`, the Debian dist installed
    into the workspace is taken from the parameter `debian_codename`.
    The dist can also be named explicitly as the value.

    The mirror defaults to `http://deb.debian.org/debian`, but can be
    given explicitly, if needed.
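
    Example (a sketch; it assumes the mirror is given in a `mirror`
    field, and that the action runs on the host):

        - debootstrap: auto   # dist comes from the debian_codename parameter
          mirror: http://deb.debian.org/debian   # optional; this is the default
          where: host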

* `archive: workspace`

    Take the current content of the workspace, put it in a compressed
    tar archive, and upload it to the artifact store. Get the name of
    the artifact from the parameter named in the `name_from` field, if
    given, and from the `artifact_name` parameter if not.

    Optionally, only parts of the workspace can be archived. The user
    can give a list of globs in the `globs` field, and anything that
    matches any of the shell filename patterns will be included. The
    default is to archive everything.

    Example:

        - archive: workspace
          name_from: debian_packages
          globs:
          - "*_*"

    This would archive everything at the root of the workspace with an
    underscore in the name. The artifact will be named using the value
    of the `debian_packages` project parameter.

* `archive: systree`

    This is identical to `archive: workspace`, except it archives the
    system tree instead of the workspace.

* `action: populate_systree`

    Get an artifact from the artifact store, and unpack it as the
    systree. The artifact is assumed to be a compressed tar archive.
    The name of the artifact comes from the `systree_name` parameter,
    or from the parameter named in the `name_from` field.

    If the artifact does not exist, end the build with failure.
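
    Example (a sketch, relying on the `systree_name` parameter for
    the artifact name):

        - action: populate_systree
          where: host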

* `action: populate_workspace`

    Identical to `action: populate_systree`, except it unpacks to the
    workspace. The default parameter name is `workspace_name`. If the
    artifact does not exist, do nothing.

* `action: git`

    Clone the git repository given in `git_url` into the directory
    named in `git_dir`, checking out the branch/tag/commit named in
    `git_ref`. If the directory already exists, do a `git remote
    update` inside it instead.

    This is intended to do all the networking parts of getting source
    code from a git server. It should be run with `where: host`, so it
    has network access and can use the worker's ssh key (for
    non-public repositories).

    Further actions, inside a container, can do other operations.
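
    Example (a sketch; the URL is a placeholder):

        - action: git
          git_url: git://git.example.com/hello.git
          git_dir: src
          git_ref: master
          where: host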

* `action: git_mirror`

    This action will eventually replace the `git` action, but has not
    yet been implemented. For now, use the `git` action as before;
    after a suitable transition period, it will be removed.

    Mirror one or more git repositories specified in the `sources`
    project parameter. The parameter is expected to have as its value
    a list of dicts, each dict containing the following fields:

    * `name` — name of the repository
    * `repo` — URL of the repository
    * `location` — ignored by `git_mirror`, but can be used to
      specify where the cloned repository is to be checked out
    * `ref` — ignored by `git_mirror`, but can be used to
      specify which ref (branch or tag or commit) should be checked out

    Additionally, `git_mirror` will use the `git_repo_base` project
    parameter: if the `repo` field is a relative URL, it will be
    joined with the `git_repo_base` value to form the full URL. If
    `repo` is a full URL, it is used as is.

    Note that `git_mirror` does NOT do the checking out, only the
    initial mirroring (as if by `git clone --mirror $repo
    .mirror/$name`) or updating of the mirror (as if by `cd
    .mirror/$name && git remote update --prune`).

    Ick provides a pipeline that uses the `git_mirror` action and
    additional `shell` or `python` actions to do the checkout.
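
    For example, a `sources` parameter could look like this (a
    sketch; names and URLs are placeholders):

        sources:
        - name: hello
          repo: hello.git       # relative, so joined with git_repo_base
          location: src/hello   # used by later checkout actions
          ref: master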

* `action: rsync`

    Copy the content of a directory in the workspace (named in
    `rsync_src`) to a remote server (named in `rsync_target`). This
    should be run on the host, to have network access and use of the
    worker's ssh key.
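
    Example (a sketch; the server and paths are placeholders):

        - action: rsync
          rsync_src: html
          rsync_target: www.example.com:/srv/http/hello
          where: host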

* `action: dput`

    Upload all Debian .changes files to the APT repository that is
    part of an ick cluster. No parameters needed.

    This should be run on the host, to have network access and use of
    the worker's ssh key.

* `action: notify`

    Use the notification service to send notifications of the current
    state of the build. You usually don't need to use this action
    manually; the controller adds a notification action automatically
    when a build ends.


Notifications
-----------------------------------------------------------------------------

Ick can send an email when a build ends. This is done automatically.
Set the project parameter `notify` to a list of email addresses that
should be notified.
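
For example, in a project definition (the address is a placeholder):

    parameters:
      notify:
      - alice@example.com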


Standard pipelines
-----------------------------------------------------------------------------

Ick comes with a set of "standard" pipelines, named `ick/get_sources`
and the like. They are stored and documented in the ick source code:
<http://git.liw.fi/ick2/tree/pipelines>

The standard pipelines are not installed automatically. The ick
instance admin needs to install and update them manually.


Example .ick files
-----------------------------------------------------------------------------

The <http://git.liw.fi/liw-ci/tree/> repository has some .ick files,
used by ick's author for his personal instances.


General advice
-----------------------------------------------------------------------------

On each ick instance, set up one or more projects that build systrees.
Build one for each target Debian release: `jessie`, `stretch`,
`unstable`, etc. Build each of these projects to populate the artifact
store with systree artifacts. Each actual project should use one of
the artifacts to do the build in a container.

You can use the `ick/build_debian_systree` pipeline to build systrees
with Debian.

When building in a container, you can install more things inside the
container. If you build repeatedly, it may be worth having more than
just the minimal container: have an artifact with all the build
dependencies already installed, so each build does not have to start
by installing them, saving time.

Ick can build several things at once: as many things as there are
workers. However, each build (currently) runs on one worker.

There's currently no way to stop a triggered build. This is a missing
feature.