Standard is a nifty DevOps framework that enables an efficient Software Delivery Life Cycle (SDLC) with the power of Nix via Flakes.
It organizes and disciplines your Nix and thereby speeds you up. It also comes with great horizontal integrations of high quality vertical DevOps tooling crafted by the Nix Ecosystem.
Stack
Integrations
The Standard Story
Once your nix code has evolved into a giant ball of spaghetti that nobody except a few select members of your tribe can still read with ease, and once it has grown into an impertinence to the rest of your colleagues, then std brings the overdue order to your piece of art through a well-defined folder structure and disciplining generic interfaces.
With std
, you’ll learn how to organize your nix
flake outputs (‘Targets’) into Cells and
Cell Blocks — folded into a useful
CLI & TUI to also make the lives of your
colleagues easier.
Through more intuition and less documentation, your team and community will finally find a canonical answer to the everlasting question: What can I do with this repository?
The Standard NixOS Story (in case you wondered)
Once you've gotten fed up with divnix/digga or a disorganized personal configuration, head straight over to divnix/hive and join the chat there. It's a work in progress.
But hey! It means we can progress together!
Getting Started
# flake.nix
{
description = "Description for the project";
inputs = {
std.url = "github:divnix/std";
nixpkgs.follows = "std/nixpkgs";
};
outputs = { std, self, ...} @ inputs: std.growOn {
inherit inputs;
# 1. Each folder inside `cellsFrom` becomes a "Cell"
# Run for example: 'mkdir nix/mycell'
# 2. Each <block>.nix or <block>/default.nix within it becomes a "Cell Block"
# Run for example: '$EDITOR nix/mycell/packages.nix' - see example content below
cellsFrom = ./nix;
# 3. Only blocks with these names [here: "packages" & "devshells"] are picked up by Standard
# It's a bit like the output type system of your flake project (hint: CLI & TUI!!)
cellBlocks = with std.blockTypes; [
(installables "packages" {ci.build = true;})
(devshells "devshells" {ci.build = true;})
];
}
# 4. Run 'nix run github:divnix/std'
# 'growOn' ... Soil:
# - here, compat for the Nix CLI
# - but can use anything that produces flake outputs (e.g. flake-parts or flake-utils)
# 5. Run: nix run .
{
devShells = std.harvest self ["mycell" "devshells"];
packages = std.harvest self ["mycell" "packages"];
};
}
# nix/mycell/packages.nix
{inputs, cell}: {
inherit (inputs.nixpkgs) hello;
default = cell.packages.hello;
}
This Repository
This repository combines the above-mentioned stack components into the ready-to-use Standard framework. It adds a curated collection of Block Types for DevOps use cases. It further dogfoods itself and implements utilities in its own Cells.
Dogfooding
Only renders in the Documentation.
{
growOn,
inputs,
blockTypes,
pick,
harvest,
}:
growOn {
inherit inputs;
cellsFrom = ./cells;
cellBlocks = [
## For downstream use
# std
(blockTypes.runnables "cli" {ci.build = true;})
(blockTypes.functions "devshellProfiles")
(blockTypes.functions "lib")
(blockTypes.functions "errors")
(blockTypes.nixago "nixago")
(blockTypes.installables "packages" {ci.build = true;})
# lib
(blockTypes.functions "dev")
(blockTypes.functions "ops")
# presets
(blockTypes.data "templates")
(blockTypes.nixago "nixago")
## For local use in the Standard repository
# _automation
(blockTypes.devshells "devshells" {ci.build = true;})
(blockTypes.nixago "nixago")
(blockTypes.containers "containers")
# (blockTypes.tasks "tasks") # TODO: implement properly
# _tests
(blockTypes.data "data")
(blockTypes.files "files")
];
}
# Soil ("compatible with the entire world")
{
devShells = harvest inputs.self ["_automation" "devshells"];
packages = harvest inputs.self [["std" "cli"] ["std" "packages"]];
templates = pick inputs.self ["presets" "templates"];
}
That's it. std.grow is a "smart" importer of your nix code and is designed to keep boilerplate at bay. In the so-called "Soil" compatibility layer, you can do whatever your heart desires. For example, put flake-utils or flake-parts patterns here. Or, as in the above example, just make your stuff play nicely with the Nix CLI.
TIP:
- Clone this repo: git clone https://github.com/divnix/std.git
- Install direnv & inside the repo, do: direnv allow (first time takes a little longer)
- Run the TUI by entering std (first time takes a little longer)

What can I do with this repository?
Documentation
The Documentation is here.
And here is the Book, a very good walk-through. Start here!
Video Series
Examples in the Wild
This GitHub search query holds a pretty good answer.
Why?
Contributions
Please enter the contribution environment:
direnv allow || nix develop -c "$SHELL"
Licenses
What licenses are used? → ./.reuse/dep5
And the usual copies? → ./LICENSES
A walk in the park
This is an excellent tutorial series by Joshua Gilman in the form of The Standard Book.
It is ideal for people with prior Nix and Nix Flakes experience.
They are written in a way that feels like a walk in the park, hence the nickname.
They are also often used to dogfood some new std
functionality and document it alongside in a palatable (non-terse) writing style.
Enjoy!
Hello World
Standard features a special project structure
that brings some awesome innovation
to this often overlooked (but important) part of your project.
With the default Cell Blocks, an apps.nix
file tells Standard
that we are creating an Application.
flake.nix
is in charge
of explicitly defining
the inputs of your project.
Btw, you can copy * the following files from here.
* don’t just clone the
std
repo: flakes in subfolders don’t work that way.
/tmp/play-with-std/hello-world/flake.nix
{
inputs.std.url = "github:divnix/std";
inputs.nixpkgs.url = "nixpkgs";
outputs = {std, ...} @ inputs:
std.grow {
inherit inputs;
cellsFrom = ./cells;
};
}
/tmp/play-with-std/hello-world/cells/hello/apps.nix
{
inputs,
cell,
}: {
default = inputs.nixpkgs.stdenv.mkDerivation rec {
pname = "hello";
version = "2.10";
src = inputs.nixpkgs.fetchurl {
url = "mirror://gnu/hello/${pname}-${version}.tar.gz";
sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
};
};
}
$ cd /tmp/play-with-std/hello-world/
$ git init && git add . && git commit -m"nix flakes can only see files under version control"
# fetch `std`
$ nix shell github:divnix/std
$ std //hello/apps/default:run
Hello, world!
You see? From nothing to running your first application in just a few seconds ✨
Assumptions
This example consumes the following defaults or builtins:
Default cellBlocks:
cellBlocks ? [
  (blockTypes.functions "library")
  (blockTypes.runnables "apps")
  (blockTypes.installables "packages")
],

Default systems:
systems ? [
  # Tier 1
  "x86_64-linux"
  # Tier 2
  "aarch64-linux"
  "x86_64-darwin"
  # Other platforms with sufficient support in stdenv which is not formally
  # mandated by their platform tier.
  "aarch64-darwin" # a lot of apple M1 already out there
],
debug ? false,
Hello Moon
A slightly more complete hello world tutorial.
This tutorial implements a very typical _automation
Cell and its Cell Blocks for a somewhat bigger project.
It also makes use of more advanced functions of std
.
Namely:
- std.growOn instead of std.grow
- std.harvest to provide compatibility layers of "soil"
- non-default Cell Block definitions
- the input debug facility
The terms “Block Type”, “Cell”, “Cell Block”, “Target” and “Action” have special meaning within the context of std
.
With these clear definitions, we navigate and communicate the code structure much more easily.
In order to familiarize yourself with them, please have a quick glance at the glossary.
File Layout
Let’s start again with a flake:
./flake.nix
{
inputs.std.url = "github:divnix/std";
inputs.nixpkgs.url = "nixpkgs";
outputs = {std, ...} @ inputs:
/*
brings std attributes into scope
namely used here: `growOn`, `harvest` & `blockTypes`
*/
with std;
/*
grows a flake "from cells" on "soil"; see below...
*/
growOn {
/*
we always inherit inputs and expose a deSystemized version
via {inputs, cell} during import of Cell Blocks.
*/
inherit inputs;
/*
from where to "grow" cells?
*/
cellsFrom = ./nix;
/*
custom Cell Blocks (i.e. "typed outputs")
*/
cellBlocks = [
(blockTypes.devshells "devshells")
(blockTypes.nixago "nixago")
];
/*
This debug facility helps you to explore what attributes are available
for a given input until you get more familiar with `std`.
*/
debug = ["inputs" "std"];
}
/*
Soil is an idiom to refer to compatibility layers that are recursively
merged onto the outputs of the `std.grow` function.
*/
# Soil ...
# 1) layer for compat with the nix CLI
{
devShells = harvest inputs.self ["_automation" "devshells"];
}
# 2) there can be various layers; `growOn` is a variadic function
{};
}
This time we specified cellsFrom = ./nix;.
This gently signals to our colleagues which files to look at (and which never to look at), depending on where they stand.
We also used std.growOn
instead of std.grow
so that we can add compatibility layers of “soil”.
Furthermore, we only defined two Cell Blocks: nixago
& devshells
. More on them follows…
./nix/_automation/*
Next, we define a _automation
cell.
Each project will have some amount of automation.
This can be repository automation, such as code generation.
Or it can be a CI/CD specification.
In here, we wire up two tools from the Nix ecosystem: numtide/devshell
& nix-community/nixago
.
Please refer to these links to get yourself a quick overview before continuing this tutorial, in case you don't know them yet.
A very short refresher:
- Nixago: Template & render repository (dot-)files with nix. Why nix?
- Devshell: Friendly & reproducible development shells — the original ™.
Some semantic background:
Both Nixago & Devshell are Component Tools.
(Vertical) Component Tools are distinct from (Horizontal) Integration Tools — such as
std
— in that they provide a specific capability in a minimal linux style: "Do one thing and do it well." Integration Tools, however, combine them into a polished user story and experience.
The Nix ecosystem is very rich in component tools, however only few integration tools exist at the time of writing.
./nix/_automation/devshells.nix
Let’s start with the cell.devshells
Cell Block and work our way backwards to the cell.nixago
Cell Block below.
More semantic background:
I could also reference them as
inputs.cells._automation.devshells
&inputs.cells._automation.nixago
But, because we are sticking with the local Cell context, we don't want to confuse the future code reader. Instead, we gently hint at the locality by just referring to them via the
cell
context.
{
inputs,
cell,
}: let
/*
I usually just find it very handy to alias all things library onto `l`...
The distinction between `builtins` and `nixpkgs.lib` has little practical
relevance, in most scenarios.
*/
l = nixpkgs.lib // builtins;
/*
It is good practice to in-scope:
- inputs by *name*
- other Cells by their *Cell names*
- the local Cell Blocks by their *Block names*.
However, for `std`, we make an exception and in-scope, despite being an
input, its primary Cell with the same name as well as the dev lib.
*/
inherit (inputs) nixpkgs;
inherit (inputs.std) std;
inherit (inputs.std.lib) dev;
inherit (cell) nixago;
in
# we use Standard's mkShell wrapper for its Nixago integration
l.mapAttrs (_: dev.mkShell) {
default = {...}: {
name = "My Devshell";
# This `nixago` option is a courtesy of the `std` horizontal
# integration between Devshell and Nixago
nixago = [
# off-the-shelf from `std`
(std.nixago.conform {configData = {inherit (inputs) cells;};})
std.nixago.lefthook
std.nixago.adrgen
# modified from the local Cell
nixago.treefmt
nixago.editorconfig
nixago.mdbook
];
# Devshell handily represents `commands` as part of
# its Message Of The Day (MOTD) or the built-in `menu` command.
commands = [
{
package = nixpkgs.reuse;
category = "legal";
/*
For display, reuse already has both a `pname` & `meta.description`.
Hence, we don't need to inline these - they are autodetected:
name = "reuse";
description = "Reuse is a tool to manage a project's LICENCES";
*/
}
];
# Always import the `std` default devshellProfile to also install
# the `std` CLI/TUI into your Devshell.
imports = [std.devshellProfiles.default];
};
}
The nixago = []; option in this definition is a special integration provided by Standard's devshell wrapper (std.lib.dev.mkShell).
This is how std
delivers on its promise of being a (horizontal) integration tool that wraps (vertical) component tools into a polished user story and experience.
Because we made use of std.harvest
in the flake, you now can actually test out the devshell via the Nix CLI compat layer by just running nix develop -c "$SHELL"
in the directory of the flake.
For a more elegant method of entering a development shell read on the direnv section below.
./nix/_automation/nixago.nix
As we have seen above, the nixago
option in the cell.devshells
Cell Block references Targets from both std.nixago
and cell.nixago
.
While you can explore std.nixago
here, let’s now have a closer look at cell.nixago
:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
inherit (inputs.std) std;
/*
While these are strictly specializations of the available
Nixago Pebbles at `std.nixago.*`, it would be entirely
possible to define a completely new pebble inline
*/
in {
/*
treefmt: https://github.com/numtide/treefmt
*/
treefmt = std.nixago.treefmt {
# we use the configData attribute to modify the
# target data structure via a simple data overlay
# (`divnix/data-merge` / `std.dmerge`) mechanism.
configData.formatter.go = {
command = "gofmt";
options = ["-w"];
includes = ["*.go"];
};
# for the `std.lib.dev.mkShell` integration with nixago,
# we also hint which packages should be made available
# in the environment for this "Nixago Pebble"
packages = [nixpkgs.go];
};
/*
editorconfig: https://editorconfig.org/
*/
editorconfig = std.nixago.editorconfig {
configData = {
# the actual target data structure depends on the
# Nixago Pebble, and ultimately, on the tool to configure
"*.xcf" = {
charset = "unset";
end_of_line = "unset";
insert_final_newline = "unset";
trim_trailing_whitespace = "unset";
indent_style = "unset";
indent_size = "unset";
};
"{*.go,go.mod}" = {
indent_style = "tab";
indent_size = 4;
};
};
};
/*
mdbook: https://rust-lang.github.io/mdBook
*/
mdbook = std.nixago.mdbook {
configData = {
book.title = "The Standard Book";
};
};
}
In this Cell Block, we have been modifying some built-in convenience std.nixago.*
pebbles.
The way configData
is merged upon the existing pebble is via a simple left-hand-side/right-hand-side data-merge
(std.dmerge
).
Background on array merge strategies:
If you know how a plain data-merge deals (not magically) with array merge semantics, you'll have noticed: we didn't have to annotate our right-hand-side arrays in this example because we were not actually amending or modifying any left-hand-side array-type data structure.
Would we have done so, we would have had to annotate:
- either with std.dmerge.append [/* ... */];
- or with std.dmerge.update [ idx ] [/* ... */].
But lucky us (this time)!
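To make the annotation concrete, here is a small, hypothetical sketch (the prettier entry and its includes list are illustrative and not taken from the actual upstream pebble): fresh keys merge as plain data, while amending an array that already exists on the left-hand side needs an explicit annotation:

treefmt = std.nixago.treefmt {
  configData = {
    # a fresh key: plain data-merge suffices
    formatter.go = {
      command = "gofmt";
      options = ["-w"];
      includes = ["*.go"];
    };
    # hypothetically amending an array the upstream pebble already defines:
    # the right-hand side must be annotated
    formatter.prettier.includes = std.dmerge.append ["*.mdx"];
  };
};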
Command Line Synthesis
With this configuration in place, you have a couple of options on the command line.
Note that you can also access any std CLI invocation via the std TUI by just typing std, in case you forgot exactly how to access one of these repository capabilities.
Debug Facility:
Since the debug facility is enabled, you will see some trace output while running these commands. To switch this off, just comment the
debug = [ /* ... */ ];
attribute in the flake.It looks something like this:
trace: inputs on x86_64-linux
trace: { cells = {…}; nixpkgs = {…}; self = {…}; std = {…}; }
Invoke devshell via nix
nix develop -c "$SHELL"
By quirks of the Nix CLI, if you don’t specify -c "$SHELL"
, you’ll be thrown into an unfamiliar bare bash
interactive shell.
That’s not what you want.
Invoke the devshell via std
In this case, invoking $SHELL
correctly is taken care of for you by the Block Type's enter
Action.
# fetch `std`
$ nix shell github:divnix/std
$ std //_automation/devshells/default:enter
Since we have declared the devshell Cell Block as a blockTypes.devshells
, std
augments its Targets with the Block Type Actions.
See blockTypes.devshells
for more details on the available Actions and their implementation.
Thanks to the cell.devshells
’ nixago
option, entering the devshell will also automatically reconcile the repository files under Nixago’s management.
Explore a Nixago Pebble via std
You can also explore the nixago configuration via the Nixago Block Type’s explore
-Action.
# fetch `std`
$ nix shell github:divnix/std
$ std //_automation/nixago/treefmt:explore
See blockTypes.nixago
for more details on the available Actions and their implementation.
direnv
Manually entering the devshell is boring.
How about a daemon that always does that automatically & efficiently when you cd
into a project directory?
Enter direnv
— the original (again; and even from the same author) 😊.
Before you continue, first install direnv according to its install instructions. It's super simple & super useful ™ and you should do it right now if you haven't yet.
Please learn how to enable direnv
in this project by following the direnv how-to.
In this case, you would adapt the relevant line to: use std nix //_automation/devshells:default
.
Now, you can simply cd
into that directory, and the devshell is loaded.
The MOTD will be shown, too.
The first time, you need to teach the direnv
daemon to trust the .envrc
file via direnv allow
.
If you want to reload the devshell (e.g. to reconcile Nixago Pebbles), you can just run direnv reload
.
Because I use these commands so often, I’ve set: alias d="direnv"
in my shell’s RC file.
Growing Cells
Growing cells can be done via two variants:
std.grow { cellsFrom = "..."; /* ... */ }
std.growOn { cellsFrom = "..."; /* ... */ } # soil
std.growOn {} # soil
This eases talking and reasoning about a std-ized repository that also needs
some sort of adapters to work together with external frameworks.
Typically, you’d arrange those adapters in numbered layers of soil, just so that it’s easier to conceptually reference them when talking / chatting.
It’s a variadic function and takes an unlimited number of “soil layers”.
{
inputs.std.url = "github:divnix/std";
outputs = {std, ...} @ inputs:
std.growOn {
inherit inputs;
cellsFrom = ./cells;
}
# soil
{} # first layer
{} # second layer
{} # ... nth layer
;
}
These layers get recursively merged onto the output of std.grow
.
Include Filter
It is very common that you want to filter your source code in order to avoid unnecessary rebuilds and increase your cache hits.
This is so common that std
includes a tool for this:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
inherit (inputs) std;
in {
backend = nixpkgs.mkYarnPackage {
name = "backend";
src = std.incl (inputs.self + /src/backend) [
(inputs.self + /src/backend/app.js)
(inputs.self + /src/backend/config/config.js)
/* ... */
];
};
}
Setup .envrc
Standard provides an extension to the direnv stdlib
via direnv_lib.sh
.
The integrity hash below ensures it is downloaded only once and cached from there on.
#! /bin/sh
# SPDX-FileCopyrightText: 2022 David Arnold <[email protected]>
# SPDX-FileCopyrightText: 2022 Kevin Amado <[email protected]>
#
# SPDX-License-Identifier: Unlicense
source "$(
nix eval \
--no-update-lock-file \
--no-write-lock-file \
--no-warn-dirty \
--accept-flake-config \
.#__std.direnv_lib 2>/dev/null \
|| nix eval .#__std.direnv_lib # show the errors
)"
use std cells //_automation/devshells:default
NOTE: In the above code use std cells //..., cells refers to the folder where Cells are grown from. If your folder is e.g. nix, adapt to use std nix //... and so forth.
It is used to automatically set up file watches on files that could modify the current devshell, discoverable through these or similar logs during loading:
direnv: loading https://raw.githubusercontent.com/divnix/std/...
direnv: using std cells //_automation/devshells:default
direnv: Watching: cells/_automation/devshells.nix
direnv: Watching: cells/_automation/devshells (recursively)
For reference, the above example loads the default
devshell from:
{
inputs,
cell,
}: let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
inherit (inputs.cells) std lib;
in
l.mapAttrs (_: lib.dev.mkShell) rec {
default = {...}: {
name = "Standard";
nixago = [
(std.nixago.conform {configData = {inherit (inputs) cells;};})
cell.nixago.treefmt
cell.nixago.editorconfig
cell.nixago.just
cell.nixago.githubsettings
std.nixago.lefthook
std.nixago.adrgen
];
commands =
[
{
package = nixpkgs.reuse;
category = "legal";
}
{
package = nixpkgs.delve;
category = "cli-dev";
name = "dlv";
}
{
package = nixpkgs.go;
category = "cli-dev";
}
{
package = nixpkgs.gotools;
category = "cli-dev";
}
{
package = nixpkgs.gopls;
category = "cli-dev";
}
]
++ l.optionals nixpkgs.stdenv.isLinux [
{
package = nixpkgs.golangci-lint;
category = "cli-dev";
}
];
imports = [std.devshellProfiles.default book];
};
book = {...}: {
nixago = [cell.nixago.mdbook];
};
checks = {...}: {
name = "checks";
imports = [std.devshellProfiles.default];
commands = [
{
name = "blocktype-data";
command = "cat $(std //_tests/data/example:write)";
}
{
name = "blocktype-devshells";
command = "std //_automation/devshell/default:enter -- echo OK";
}
{
name = "blocktype-runnables";
command = "std //std/cli/default:run -- std OK";
}
];
};
}
Why nix
?
A lot of people write a lot of confusing stuff about nix.
So here, we’ll try to break it down, instead.
nix
is “json
on steroids”
In configuration management, you have a choice: data vs. language.
On stackoverflow, you’ll be taught the “data” stance, because it’s simple.
And all of a sudden you hit reality. Outside of a “lab” environment, you suddenly need to manage a varying degree of complexity.
So you need configuration combinators, or in other words a full-blown language, to efficiently render your configurations.
There are a couple of options that you'll recognize if you've gotten serious about the configuration challenge, like:
And there is nix
, the language. In most aspects, it isn’t hugely distinct from the others,
but it has superpowers. Read on!
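As a tiny, hypothetical illustration of the "json on steroids" idea (the service names and ports are made up), a few lines of nix act as configuration combinators that render down to plain JSON:

let
  # a reusable combinator instead of copy-pasted config stanzas
  mkService = port: {
    inherit port;
    host = "0.0.0.0";
    restart = "always";
  };
in
  builtins.toJSON {
    services.api = mkService 8080;
    services.web = mkService 3000;
  }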
nix
’ superpowers
You know the concept of string interpolation.
Every time nix
interpolates an identifier, there is something that
you don’t immediately see: it keeps a so called “string context” right
at the site of interpolation. That string context holds a directed acyclic
graph of all the dependencies that are required to make that string.
“Well, it’s just a string; what on earth should I need to make a string?”, you may say.
There is a special category of strings, so called “Nix store paths”
(strings that start with /nix/store/...
). These store paths represent
build artifacts that are content addressed ahead-of-time through
the inputs of an otherwise pure build function, called derivation
.
When you finally reify (i.e. "build") your string interpolation, then all these Nix store paths get built as well.
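A minimal sketch of that mechanism, assuming plain nixpkgs (the script name and greeting are made up): interpolating a package into a string carries its store path, and therefore its whole build graph, along in the string context, so realizing the script also builds hello first:

{ pkgs ? import <nixpkgs> {} }:
# the interpolation below records pkgs.hello in the string context of the script
pkgs.writeShellScript "greet" ''
  ${pkgs.hello}/bin/hello --greeting "carried along via string context"
''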
This might be a bit of a mind-boggling angle, but after a while, you may realize:
- Nix is a massive build pipeline that tracks all things to their source.
- In their capacity as pure build functions, derivations build reproducibly.
- Reproducible builds are the future of software supply chain security, among other things.
- You'll start asking: "Who the heck invented all that insecure nonsense of opaque binary registries? Shouldn't those smart people have known better?"
- And from this realization, there’s no coming back.
- And you’ll have joined the European Union, banks and blockchain companies who also realized: we need to fix our utterly broken and insecure build systems!
- By that time, you’ll have already assimilated the legendary Ken Thompson’s “Reflections on Trusting Trust”.
Why std?
Problem
Nix is a marvel to some and a cruelty to others.
Much of this professional schism is due to two fundamental issues:
- Nix is a functional language without typing
- Therefore, Nix enthusiasts seem to freaking love writing the most elegant and novel boilerplate all over again the next day.
The amount of domain-specific knowledge required to untangle those most elegant and novel boilerplate patterns prevents
the other side of the schism, very understandably, from seeing through the smoke to the true beauty and benefits of nix
as a
build and configuration language.
Lack of typing adds to the problem by forcing nix
-practitioners to go out of their way (e.g. via divnix/yants
) to
add some internal boundaries and contracts to an ever morphing global context.
As a consequence, few actually do that. And contracts across internal code boundaries are either absent or rudimentary or — yet again — “elegant and novel”. Neither of which satisfactorily settles the issue.
Solution
std
doesn’t add language-level typing. But a well-balanced folder layout cut at 3 layers of conceptual
nesting provides the fundamentals for establishing internal boundaries.
Cell → Cell Block → Target → [Action]
Where …
- Cells group functionality.
- Cell Blocks type outputs and implement Actions.
- Targets name outputs.
Programmers are really good at pattern-abstraction when looking at two similar but slightly different things: Cells and Cell Blocks set the stage for code readability.
Cell Blocks only allow one possible interface: {inputs, cell}:
- cell: the local Cell, promoting separation of concern
- inputs: the deSystemized flake inputs, plus:
  - inputs.self = self.sourceInfo: reference source code in nix; filter with std.incl; don't misuse the global self.
  - inputs.cells: the other cells by name; code that documents its boundaries.
  - inputs.nixpkgs: an instantiated nixpkgs for the current system.
Now, we have organized nix
code. Still, nix
is not for everybody.
And for everybody else the std
TUI/CLI companion answers a single question to perfection:
The GitOps Question:
What can I actually do with this std
-ized repository?
The Standard Answer:
std
breaks down GitOps into a single UX-optimized TUI/CLI entrypoint.
Benefit
Not everybody is going to love nix
now.
But the ones who know its secrets now have an effective tool to more empathically spark the joy.
Or simply: 💔 → 🧙 → 🔧 → ✨→ 🏖️
The smallest common denominator, in any case:
Only ever install a single dependency (
nix
) and reach any repository target. Reproducibly.
Architecture Decision Record
An architecture decision record (ADR) is a document that captures an important architectural decision made along with its context and consequences.
The template has all the info.
Usage
To interact with this ADR, enter the devshell and interact through the adrgen
tool.
1. Adopt semi-conventional file locations
Date: 2022-03-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
Repository navigation is among the first activities to build a mental model of any given repository.
The Nix Ecosystem has come up with some weak conventions: these are variations that are mainly informed by the nixpkgs
repository, itself.
Despite that, users find it difficult to quickly "wrap their head" around a new project.
This is oftentimes a result of an organically grown file organization that has trouble keeping up with growing project semantics.
As a result, onboarding onto a "new" nix project, even within the same organizational context, can sometimes be a very frustrating and time-consuming activity.
Decision
What is the change that we're proposing and/or doing?
A semi-conventional folder structure shall be adopted.
That folder structure shall have an abstract organization concept.
At the same time, it shall leave the user maximum freedom of semantics and naming.
Hence, 3 levels of organization are adopted. These levels correspond to the abstract organizational concepts of:
- consistent collection of functionality ("what makes sense to group together?")
- repository output type ("what types of gitops artifacts are produced?")
- named outputs ("what are the actual outputs?")
Consequences
What becomes easier or more difficult to do because of this change?
With this design and despite complete freedom of concrete semantics, a prototypical mental model can be reused across different projects.
That same prototypical mental model also speeds up scaffolding of new content and code.
At the expense of nested folders, it may still be further expanded, if additional organization is required.
All the while that the primary meta-information about a project is properly communicated through these first three levels via the file system api, itself (think ls
/ rg
/ fd
).
On the other hand, this rigidity is sometimes overkill and users may resort to filler names such as "default
", because a given semantic only produces singletons.
This is acceptable, however, because this parallelism in addressing even these singleton values trades for very easy expansion or refactoring, as the meta-models of code organization already align.
2. Restrict the calling interface
Date: 2022-03-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
The Nix Ecosystem has optimized for contributor efficiency at the expense of local code readability and local reasoning.
Over time, the callPackage
idiom was developed that destructures arbitrary attributes of an 80k upstream attributeset provided by nixpkgs
.
A complicating side condition is added, where overlays modify that original upstream packages set in arbitrary ways.
This is not a problem for people, who know nixpkgs by heart and it is not a problem for the author either.
It is a problem for the future code reader, Nix expert or less so, who needs to grasp the essence of "what's going on" under a productivity side condition.
Local reasoning is a tried and tested strategy to help mitigate those issues.
In a variant of this problem, we observe only somewhat convergent, but still largely diverging styles of passing arguments in general across the repository context.
Decision
What is the change that we're proposing and/or doing?
Encourage local reasoning by always fully qualifying identifiers within the scope of a single file.
In order to do so, the entry level nix files of this framework have exactly one possible interface: {inputs, cell}
.
inputs
represent the global inputs, whereas cell
keeps reference to the local context.
A Cell is the first ordering principle for "consistent collection of functionality".
Consequences
What becomes easier or more difficult to do because of this change?
This restricts the notion of "how files can communicate with each other" to the prescribed 3 layers of organization.
That inter-files-interface is the only global context to really grasp, and it is structurally aligned across all Standard projects.
By virtue of this meta-model of a global context and inter-file communication, the barriers to local reasoning are greatly reduced for a somewhat familiarized code reader.
The two context references are well known (flake inputs & cell-local blocks) and easily discoverable.
For authors, this schema takes away any delay that might arise out of the consideration of how to best structure that inter-file communication schema.
In our experience, a significant and low-value (and ad-hoc) design process can be leap-frogged via this guidance.
3. Hide system for mortals
Date: 2022-04-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
In the context of DevOps (Standard is a DevOps framework), cross-compilation is a significantly lesser concern than it is for packagers.
The pervasive use of system
in the current Nix (and foremost Flakes) Ecosystem is an optimization (and in part education) choice for these packagers.
However, in the context of DevOps, while not being irrelevant, it accounts for a fair share of distraction potential.
This ultimately diminishes code readability and reasoning; and consequently adoption. Especially in those code paths where system
is a secondary concern.
Decision
What is the change that we're proposing and/or doing?
De-systemize everything to the "current" system, effectively hiding the explicit manipulation from plain sight in most cases.
An attribute set, that differentiates for systems on any given level of its tree, is deSystemized
.
This means that all child attributes of the "current" system are lifted onto the "system"-level as siblings to the system attributes.
That also means, if explicit reference to system
is necessary, it is still there among the siblings.
The "current" system is brought into scope automatically, however.
What "current" means, is an early selector ("select early and forget"), usually determined by the user's operating system.
Consequences
What becomes easier or more difficult to do because of this change?
The explicit handling of system
in foreign contexts, where system
is not a primary concern, is largely eliminated.
This makes using this framework a little easier for everybody, including packaging experts.
Since nixpkgs
, itself, exposes nixpkgs.system
and packaging without nixpkgs
is hardly imaginable, power-users still enjoy easy access to the "current" system, in case it's needed.
4. Early select system for conceptual untangling
Date: 2022-04-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
Building on the previous ADR, we saw why we hide system
from plain sight.
In that ADR, we mention "select early and forget" as a strategy to scope the current system consistently across the project.
The current best practices for flakes postulate system
as the second level selector of an output attribute.
For current flakes, type primes over system.
However, this design choice makes the lemma "select early and forget" across multiple code-paths a pain to work with.
This handling is exacerbated by the distinction between "systemized" and "non-systemized" (e.g. lib
) output attributes.
In the overall set of optimization goals of this framework, this distinction is of extraordinarily poor value; more so as function calls are memoized during a single evaluation, which renders the system selector computationally irrelevant where not used.
Decision
What is the change that we're proposing and/or doing?
- Move the system selector from the second level to the first level.
- Apply the system selector regardless and without exception.
Consequences
What becomes easier or more difficult to do because of this change?
The motto "select early and forget" makes various code-paths easier to reason about and maintain.
The Nix CLI completion won't respond gracefully to these changes. However, the Nix CLI is explicitly not a primary target of this framework. The reason for this is that the use cases for the Nix CLI are somewhat skewed towards the packager use case, but in any case are (currently) not purpose built for the DevOps use case.
A simple patch to the Nix binary can mitigate this for people whose muscle memory prefers the Nix CLI regardless. If you've already got that level of muscle memory, its meandering scope is probably not an issue for you anymore anyway.
5. Nixpkgs is still special, but not too much
Date: 2022-05-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
In general, Standard wouldn't treat any input as special.
However, no project that requires source distributions of one of the 80k+ packages available in nixpkgs
can practically do without it.
Now, nixpkgs
has this weird and counter-intuitive mouthful of legacyPackages
, which was originally intended to ring an alarm bell and, for the non-nix-historians, still does.
Also, not very many other package collections adopt this idiom which makes it pretty much a singularity of the Nix package collection (nixpkgs
).
Decision
What is the change that we're proposing and/or doing?
If inputs.nixpkgs
is provided, in-scope legacyPackages
onto inputs.nixpkgs
, directly.
Consequences
What becomes easier or more difficult to do because of this change?
Users of Standard access packages as nixpkgs.<package-name>
.
Users that want to interact with nixos, do so by loading nixos = import (inputs.nixpkgs + "/nixos");
or similar.
The close coupling of the Nix Package Collection and NixOS now is broken.
This suits the DevOps use case well, which is not primarily concerned with the inseparable union of the Nix Packages Collection and NixOS.
It rather presents a plethora of use cases that are content with the Nix Package Collection alone, and where NixOS would present as a distraction.
Now, this separation is more explicit.
As another consequence of not treating nixpkgs
(or even the packaging use case) as special, Standard does not implement primary support for overlays.
6. Avoid fix-point logic, such as overlays
Date: 2022-05-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
Fix point logic is marvelously magic and also very practical.
A lot of people love the concept of nixpkgs
's overlays
.
However, we've all been suckers in the early days, and fix point logic probably wasn't one of the concepts that we grasped intuitively right at the beginning of our Nix journey.
The concept of recursion all in itself is already demanding to reason about, and the concept of recurse-until-no-more-possible is even more mind-boggling.
Fix points are also clear instances of overloading global context.
And global context is a double-edged sword: high productivity for the one who has a good mental model of it, and a nightmare for the one who has to resort to local reasoning.
Decision
What is the change that we're proposing and/or doing?
In the interest of balancing productivity (for the veteran) and ease-of-onboarding (for the novice), we do not implement primary support for fix-point logic, such as overlays,
at the framework level.
Consequences
What becomes easier or more difficult to do because of this change?
Users who depend on it need to scope its use to a particular Cell Block.
For the Nix package collection, users can do, for example: nixpkgs.appendOverlays [ /* ... */ ].
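A small, hypothetical sketch of such scoping (the overlay and package names are illustrative): a single Cell Block re-instantiates nixpkgs with an overlay, and nothing outside this file ever sees it:

{ inputs, cell }: let
  # the overlay stays local to this Cell Block
  pkgs = inputs.nixpkgs.appendOverlays [
    (final: prev: {
      hello = prev.hello.overrideAttrs (_: {doCheck = false;});
    })
  ];
in {
  hello = pkgs.hello;
}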
There is a small penalty in evaluating nixpkgs
a second time, since every moving of the fix point retriggers a complete evaluation.
But since this decision is made in the interest of balancing trade-offs, this appears to be cost-effective in accordance with the overall optimization goals of Standard.
This is an opinionated pattern.
It helps structure working together on microservices with
std
.
The 4 Layers of Packaging
The Problem
We have written an application and now we want to package and run it. For its supply chain security benefits, we have been advised to employ reproducible and source-based distribution mechanisms. We furthermore need an interoperability platform that is generic and versatile: a configuration “lingua franca”. Our peers who write another application in another language should share that same approach. Thereby, we avoid the negative external effects of DevOps silos on integrators and operators, alike. Short: we make adoption of our application as easy as possible for our consumers.
The Actors
Note that each actor persona can be exercised by one and the same person or a group of persons. Although possible, and even frequently so, it doesn't imply that these roles are necessarily taken by distinct individuals.
Developer
The Developer persona incrementally modifies the source code. At times, such modifications are relevant at the interface to the Operator persona. One such example is when the app configuration is modified. Another is when important runtime characteristics are amended.
Operator
The Operator persona brings the application to production. She typically engages in further wrapping code or other artifact creation. She also supervises and observes the running application across different environments.
Release Manager
The Release Manager persona cuts releases at discrete points in time. In doing so, she procures their artifacts for general (public) consumption. Any release is tied to a sufficiently high level of assurance of an artifact's desired properties. For that purpose, she works with the Developer, Operator & QA personas along these 4 layers of packaging.
QA
The QA persona establishes various levels of assurance of an artifact’s desired properties. Thereby, the observable artifacts can emanate from any layer of these 4 layers of packaging. She informs the Developer, Operator and Release Manager personas about any found assurance levels. She can do so through manual or automatic (CI) means.
The Layers
flowchart TD
    packaging([Packaging])
    operable([Operable])
    image([OCI-Image])
    scheduler([Scheduler Chart])
    packaging --> operable
    operable --> image
    image --> scheduler
    click packaging href "#packaging-layer" "Jump to the packaging layer section"
    click operable href "#operable-layer" "Jump to the operable layer section"
    click image href "#oci-image-layer" "Jump to the OCI image layer section"
    click scheduler href "#scheduler-chart-layer" "Jump to the scheduler chart layer section"
There is one very important factoring & interoperability rule about these layers:
A domain concern of a higher layer must not bleed into previous layers.
Observing this very simple rule ensures long term interoperability and maintainability of the stack. For example, not presuming a particular scheduler in the operable gives contributors a chance to easily add another scheduler target without painful refactoring.
Future Work: depending on how you count, there may actually be a 5th layer: the operator layer. But we may cover this in a future version of this article in further detail. If you don't want to wait, you may have a conceptual look at the Charmed Operator Framework and Charmhub.
Note that it would be possible to further fold these interfaces, and a Nix veteran might be inclined to do so. But doing so would defeat the purpose of exposing well-defined, layered interfaces along role boundaries and subject-matter concepts, for ease of communication and collaboration, as well as for external artifact consumers.
Packaging Layer
Cell Block: (blockType.installables "packages")
Location: **/packages.nix # or **/packages/
Actors:
- Build Expert Panel, Nix- & language-specific
- Release Manager
This Cell Block builds, unit-tests & packages the application via the appropriate Nix primitives. Each programming language has a different best practice approach to packaging. Therefore, a fair amount of domain knowledge of both Nix and the language's build system is required.
The location of the actual build instructions is secondary. At minimum, though, for transparency's and uniformity's sake, they are still proxied via this Cell Block. So in the case that upstream already contains appropriate build instructions, the following indirection is perfectly valid (and necessary):
{ inputs, cell }: {
app = inputs.upstream.packages.some-app;
}
Build instructions themselves should encompass executing unit tests. Builds that fail unit tests should already be filtered out at this layer (i.e. “a build that fails unit tests is no build, at all”).
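A minimal sketch of that rule, assuming a Go application (the package name, source paths and settings are illustrative): the unit tests run inside the build, so a failing test yields no artifact at all:

{ inputs, cell }: {
  app = inputs.nixpkgs.buildGoModule {
    pname = "some-app";
    version = "0.1.0";
    src = inputs.std.incl inputs.self [(inputs.self + /src/app)];
    vendorHash = null; # no vendored dependencies in this sketch
    doCheck = true; # `go test` runs as part of the build
  };
}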
A Release Manager may decide to provide these artifacts to the general public on tagged releases.
In a hurry?
Fetching binary artifacts and incorporating them at this layer as a temporary work-around for non-production environments is acceptable.
Operable Layer
Cell Block: (blockType.runnables "operables")
Location: **/operables.nix # or **/operables/
Actors:
- Developer
- Operator
This Cell Block exclusively defines the runtime environment of the application via the operable script.
This script — customarily written in bash
— serves as a concise and reified communication channel between Developers and Operators.
As such, Operators will find all the primary configuration options re-encoded at a glance and in a well-known location.
In the other direction, Developers will find all the magic ad-hoc wrapping that Operators had to engage in, in order to run the application on the target scheduler.
Through this communication channel, Operators reliably take note of configuration drift, while Developers gain a valuable source of backlog to increase the operational robustness of the application.
Standard includes a specific library function that establishes an implementation-site interface for operables and their collaterals which significantly eases working on the following layers.
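The library function referred to here is std.lib.ops.mkOperable (see the deprecation notes further down). A hypothetical sketch of an operables Cell Block built with it; treat the exact argument names (package, runtimeScript) as assumptions:

{ inputs, cell }: let
  inherit (inputs.std.lib) ops;
in {
  app = ops.mkOperable {
    # the artifact from the packaging layer
    package = cell.packages.app;
    # the runtime contract between Developers and Operators
    runtimeScript = ''
      exec ${cell.packages.app}/bin/some-app --port "''${PORT:-8080}"
    '';
  };
}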
OCI-Image Layer
Cell Block: (blockType.containers "oci-images")
Location: **/oci-images.nix # or **/oci-images/
Actors:
- Operator
- Release Manager
This layered packaging pattern promotes source-based distribution in principle. Despite that, in common operating scenarios, we require a security stop-gap separation. This ensures very fundamentally that nothing is accidentally built on the target (production) worker node, where it would cannibalize critical resources.
We chose OCI-Images as the binary distribution format. It not only fits that purpose through the OCI Distribution Specification, but also collaterally procures interoperability for 3rd parties: OCI images are the de-facto industry standard for deployment artifacts.
If the operables have been created via the above-mentioned library function, then, using the Standard OCI image library function, the creation of OCI images trivially reduces to:
{ inputs, cell }: let
inherit (inputs.std.lib) ops;
in {
image-hard = ops.mkStandardOCI {
name = "docker.io/my-image-hardened";
operable = cell.operables.app;
};
image = ops.mkStandardOCI {
name = "docker.io/my-image";
operable = cell.operables.app;
debug = true;
};
}
Alternatively, any of the available Nix-based OCI generation mini-frameworks can be used;
nlewo/nix2container
being the recommended one.
Hence, this mini-framework is internally used by the operables library function.
A Release Manager may decide to provide these artifacts to the general public on tagged releases.
In a hurry?
Fetching published images and incorporating them at this layer as a temporary work-around for non-production environments is acceptable.
Scheduler Chart Layer
Cell Block: (blockType.functions "<sched>Charts")
Location: **/<sched>Charts.nix # or **/<sched>Charts/
Actors:
- Operator
- Release Manager
The scheduler chart is not yet manifest data. Rather, it is a function interface that commonly renders to such json-serializable manifest data. These manifests are then ingested and interpreted by the scheduler.
A fair amount of scheduler domain knowledge and familiarity with its scheduling options is required, especially during creation.
These charts can then be processed further downstream (e.g. in Nix code) to specialize out the final manifests and environments.
Since these charts are the basis of various environments for development, staging and production, it is highly recommended to keep their function interface extremely minimal and stable.
This avoids the risk of inadvertently modifying production manifests (e.g. via a human error in the base charts) based on a development or staging requirement.
In these cases, it is highly recommended to resort to data-oriented overlay mechanisms for ad-hoc modification.
A purpose-built tool to do so (called data-merge
) is already re-exported under std.dmerge
for convenience.
Those modifications should only propagate into a chart interface after stabilizing and after having successfully percolated through all existing environments first.
A Release Manager may decide to provide these artifacts to the general public on tagged releases. For example: in the transpiled form of a widely used scheduler-specific config sharing format, such as helm-charts.
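A hypothetical sketch of such a chart Cell Block (the nomadCharts block name, the job layout and all field values are illustrative): a function with a deliberately small and stable interface that renders to manifest data, which downstream code can then specialize per environment via data-merge:

{ inputs, cell }: {
  # hypothetically declared as (blockType.functions "nomadCharts")
  app = { namespace, datacenters ? ["dc1"] }: {
    job.my-app = {
      inherit namespace datacenters;
      group.app.task.app = {
        driver = "docker";
        config.image = "docker.io/my-image:latest";
      };
    };
  };
}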
Future Work: it might be a feasible task to extract a common base interface among different schedulers and thereby implement a base chart that we could simply specialize for each target scheduler (including
systemd
). But this may be the subject of future research. Potentially, the above-mentioned Charmed Operator Framework even obsoletes that need a priori and in practical terms.
Standard, and Nix and Rust, oh my!
This template uses Nix to create a sane development shell for Rust projects, Standard for keeping your Nix code well organized, Fenix for pulling the latest rust binaries via Nix, and Crane for building Rust projects in Nix incrementally, making quick iteration a breeze.
Rust Analyzer is also wired up properly for immediate use from a terminal based editor with language server support. Need one with stellar Nix and Rust support? Try Helix!
Bootstrap
# make a new empty project dir
mkdir my-project
cd my-project
# grab the template
nix flake init -t github:divnix/std#rust
# do some initialization
git init && git add .
# enter the devshell
direnv allow || nix develop
# continue some initialization
cargo init # pass --lib for library projects
cargo build # to generate Cargo.lock
git add . && git commit -m "init"
TUI/CLI
TUI/CLI:
# TUI
std
# CLI
std //<TAB>
std re-cache # refresh the CLI cache
std list # show a list of all targets
# Version
std -v
Help:
❯ std -h
std is the CLI / TUI companion for Standard.
- Invoke without any arguments to start the TUI.
- Invoke with a target spec and action to run a known target's action directly.
Usage:
std //[cell]/[block]/[target]:[action] [args...]
std [command]
Available Commands:
list List available targets.
re-cache Refresh the CLI cache.
Flags:
-h, --help help for std
-v, --version version for std
Use "std [command] --help" for more information about a command.
Conventions in std
In principle, we all want to be able to read code with local reasoning.
However, these few conventions are pure quality of life and help us to keep our nix code organized.
Nix File Locations
Nix files are imported from either of these two locations, if present, in this order of precedence:
${cellsFrom}/${cell}/${block}.nix
${cellsFrom}/${cell}/${block}/default.nix
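For example, with cellsFrom = ./nix, a hypothetical cell mycell with a packages Cell Block may live at either:
nix/mycell/packages.nix
nix/mycell/packages/default.nix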
Readme File Locations
Readme files are picked up by the TUI in the following places:
${cellsFrom}/${cell}/Readme.md
${cellsFrom}/${cell}/${block}/Readme.md
${cellsFrom}/${cell}/${block}/${target}.md
Cell Block File Arguments
Each Cell Block is a function and expects the following standardized interface for interoperability:
{ inputs, cell }: {}
The inputs
argument
The inputs
argument holds all the de-systemized flake inputs plus a few special inputs:
{
inputs = {
self = {}; # sourceInfo of the current repository
nixpkgs = {}; # an _instantiated_ nixpkgs
cells = {}; # the other cells in this repo
};
}
The cell
argument
The cell
argument holds all the different Cell Block targets of the current cell.
This is the main mechanism by which code organization and separation of concern is enabled.
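A small, hypothetical sketch (the file path and the backend target are made up): a runnables Cell Block referencing a sibling packages Cell Block of the same cell via cell:

# nix/mycell/apps.nix (hypothetical)
{ inputs, cell }: {
  default = inputs.nixpkgs.writeShellScriptBin "serve" ''
    # cell.packages.* are the Targets of nix/mycell/packages.nix
    exec ${cell.packages.backend}/bin/backend --port 8080
  '';
}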
The deSystemized inputs
All inputs are scoped for the current system, which is derived from the systems
input list to std.grow
.
That means contrary to the usual nix-UX, in most cases, you don’t need to worry about system
.
The current system will be “lifted up” one level, while still providing full access to all systems
for
cross-compilation scenarios.
# inputs.a.packages.${system}
{
inputs.a.packages.pkg1 = {};
inputs.a.packages.pkg2 = {};
/* ... */
inputs.a.packages.${system}.pkg1 = {};
inputs.a.packages.${system}.pkg2 = {};
/* ... */
}
Top-level system
-scoping of outputs
Contrary to the upstream flake schema, all outputs are system
spaced at the top-level.
This allows us to uniformly select on the current system and forget about it for most
of the time.
Sometimes nix
evaluations don’t strictly depend on a particular system
, and scoping
them seems counter-intuitive. But due to the fact that function calls are memoized, there
is never a penalty in actually scoping them. So for the sake of uniformity, we scope them
anyways.
The outputs therefore abide by the following “schema”:
{
${system}.${cell}.${block}.${target} = {};
}
Deprecations
{inputs}: time: body: let
l = inputs.nixpkgs.lib // builtins;
pad = l.concatStringsSep "" (l.genList (_: " ") (20 - (l.stringLength time)));
in
l.warn ''
===============================================
!!! 🔥️ STANDARD DEPRECATION WARNING 🔥️ !!!
-----------------------------------------------
!!! Action required until scheduled removal !!!
!!! Scheduled Removal: ${pad}${time} !!!
-----------------------------------------------
On schedule, deprecated facilities will be
removed from Standard without further warning.
-----------------------------------------------
${body}
===============================================
⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳
''
Please observe the following deprecations and their deprecation schedule:
inputs: let
removeBy = import ./cells/std/errors/removeBy.nix {inherit inputs;};
in {
warnRemovedDevshellOptionAdr = removeBy "December 2022" ''
The std.adr.enable option has been removed from the std shell.
Please look for something like "adr.enable = false" and drop it.
'';
warnRemovedDevshellOptionDocs = removeBy "December 2022" ''
The std.docs.enable option has been removed from the std shell.
Please look for something like "docs.enable = false" and drop it.
'';
warnMkMakes = removeBy "December 2022" ''
std.lib.fromMakesWith has been refactored to std.lib.mkMakes.
It furthermore doesn't take 'inputs' as its first argument
anymore.
'';
warnMkMicrovm = removeBy "December 2022" ''
std.lib.fromMicrovmWith has been refactored to std.lib.mkMicrovm.
It furthermore doesn't take 'inputs' as its first argument
anymore.
'';
warnNewLibCell = removeBy "December 2022" ''
'std.std.lib' has been distributed into its own cell 'std.lib'
Please access functions via their new location:
... moved to 'std.lib.ops':
- 'std.std.lib.mkMicrovm' -> 'std.lib.ops.mkMicrovm'
- 'std.std.lib.writeShellEntrypoint' -> 'std.lib.ops.writeShellEntrypoint'
... moved to 'std.lib.dev':
- 'std.std.lib.mkShell' -> 'std.lib.dev.mkShell'
- 'std.std.lib.mkNixago' -> 'std.lib.dev.mkNixago'
- 'std.std.lib.mkMakes' -> 'std.lib.dev.mkMakes'
'';
warnWriteShellEntrypoint = removeBy "December 2022" ''
'std.lib.ops.writeShellEntrypoint' is deprecated.
Instead, use 'std.lib.ops.mkOperable' together
with 'std.lib.ops.mkStandardOCI'.
Please consult its documentation.
'';
warnOldActionInterface = actions:
removeBy "March 2023" ''
The action interface has changed from:
{ system, flake, fragment, fragmentRelPath }
To:
{ system, target, fragment, fragmentRelPath }
Please adjust the following actions:
${builtins.concatStringsSep "\n" (map (a: " - ${a.name}: ${(builtins.unsafeGetAttrPos "name" a).file}") actions)}
'';
}
Builtin Block Types
A few Block Types are packaged with std
.
In practical terms, Block Types distinguish themselves through the actions they provide to a particular Cell Block.
It is entirely possible to define custom Block Types with custom Actions according to the needs of your project.
Data
{
nixpkgs,
mkCommand,
}: let
l = nixpkgs.lib // builtins;
/*
Use the Data Blocktype for json serializable data.
Available actions:
- write
- explore
For all actions is true:
Nix-proper 'stringContext'-carried dependency will be realized
to the store, if present.
*/
data = name: {
inherit name;
type = "data";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: let
inherit (nixpkgs.legacyPackages.${system}) pkgs;
json = pkgs.writeTextFile {
name = "data.json";
text = builtins.toJSON target;
};
jq = ["${pkgs.jq}/bin/jq" "-r" "'.'" "${json}"];
fx = ["|" "xargs" "cat" "|" "${pkgs.fx}/bin/fx"];
in [
(mkCommand system {
name = "write";
description = "write to file";
command = "echo ${json}";
})
(mkCommand system {
name = "explore";
description = "interactively explore";
command = l.concatStringsSep "\t" (jq ++ fx);
})
];
};
in
data
Functions
{
nixpkgs,
mkCommand,
}: let
l = nixpkgs.lib // builtins;
/*
Use the Functions Blocktype for reusable nix functions that you would
call elsewhere in the code.
Also use this for all types of modules and profiles, since they are
implemented as functions.
Consequently, there are no actions available for functions.
*/
functions = name: {
inherit name;
type = "functions";
};
in
functions
Runnables
{
nixpkgs,
mkCommand,
sharedActions,
}: let
lib = nixpkgs.lib // builtins;
/*
Use the Runnables Blocktype for targets that you want to
make accessible with a 'run' action on the TUI.
*/
runnables = name: {
__functor = import ./__functor.nix;
inherit name;
type = "runnables";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: [
(sharedActions.build system target)
(sharedActions.run system target)
];
};
in
runnables
Installables
{
nixpkgs,
mkCommand,
sharedActions,
}: let
l = nixpkgs.lib // builtins;
/*
Use the Installables Blocktype for targets that you want to
make available for installation into the user's nix profile.
Available actions:
- install
- upgrade
- remove
- build
- bundle
- bundleImage
- bundleAppImage
*/
installables = name: {
__functor = import ./__functor.nix;
inherit name;
type = "installables";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: [
(sharedActions.build system target)
# profile commands require a flake ref
(mkCommand system {
name = "install";
description = "install this target";
command = ''
# ${target}
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
nix profile install $PRJ_ROOT#${fragment}
'';
})
(mkCommand system {
name = "upgrade";
description = "upgrade this target";
command = ''
# ${target}
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
nix profile upgrade $PRJ_ROOT#${fragment}
'';
})
(mkCommand system {
name = "remove";
description = "remove this target";
command = ''
# ${target}
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
nix profile remove $PRJ_ROOT#${fragment}
'';
})
# TODO: use target. `nix bundle` requires a flake ref, but we may be able to use nix-bundle instead as a workaround
(mkCommand system {
name = "bundle";
description = "bundle this target";
command = ''
# ${target}
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
nix bundle --bundler github:Ninlives/relocatable.nix --refresh $PRJ_ROOT#${fragment}
'';
})
(mkCommand system {
name = "bundleImage";
description = "bundle this target to image";
command = ''
# ${target}
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
nix bundle --bundler github:NixOS/bundlers#toDockerImage --refresh $PRJ_ROOT#${fragment}
'';
})
(mkCommand system {
name = "bundleAppImage";
description = "bundle this target to AppImage";
command = ''
# ${target}
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
nix bundle --bundler github:ralismark/nix-appimage --refresh $PRJ_ROOT#${fragment}
'';
})
];
};
in
installables
Microvms
{
nixpkgs,
mkCommand,
}: let
lib = nixpkgs.lib // builtins;
/*
Use the Microvms Blocktype for Microvm.nix - https://github.com/astro/microvm.nix
Available actions:
- run
- console
- microvm
*/
microvms = name: {
inherit name;
type = "microvms";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: [
(mkCommand system {
name = "run";
description = "run the microvm";
command = ''
${target.config.microvm.runner.${target.config.microvm.hypervisor}}/bin/microvm-run
'';
})
(mkCommand system {
name = "console";
description = "enter the microvm console";
command = ''
${target.config.microvm.runner.${target.config.microvm.hypervisor}}/bin/microvm-console
'';
})
(mkCommand system {
name = "microvm";
description = "pass any command to microvm";
command = ''
${target.config.microvm.runner.${target.config.microvm.hypervisor}}/bin/microvm-"$@"
'';
})
];
};
in
microvms
Devshells
{
nixpkgs,
mkCommand,
sharedActions,
}: let
l = nixpkgs.lib // builtins;
mkDevelopDrv = import ../devshell-drv.nix;
/*
Use the Devshells Blocktype for devShells.
Available actions:
- build
- enter
*/
devshells = name: {
__functor = import ./__functor.nix;
inherit name;
type = "devshells";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: let
developDrv = mkDevelopDrv target;
in [
(sharedActions.build system target)
(mkCommand system {
name = "enter";
description = "enter this devshell";
command = ''
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
if test -z "$PRJ_DATA_DIR"; then
echo "PRJ_DATA_DIR is not set. Action aborting."
exit 1
fi
profile_path="$PRJ_DATA_DIR/${fragmentRelPath}"
mkdir -p "$profile_path"
# ${developDrv}
nix_args=(
"${builtins.unsafeDiscardStringContext developDrv.drvPath}"
"--no-update-lock-file"
"--no-write-lock-file"
"--no-warn-dirty"
"--accept-flake-config"
"--no-link"
"--build-poll-interval" "0"
"--builders-use-substitutes"
)
nix build "''${nix_args[@]}" --profile "$profile_path/shell-profile"
_SHELL="$SHELL"
eval "$(nix print-dev-env ${developDrv})"
SHELL="$_SHELL"
if ! [[ -v STD_DIRENV ]]; then
if declare -F __devshell-motd &>/dev/null; then
__devshell-motd
fi
exec $SHELL -i
fi
'';
})
];
};
in
devshells
Containers
{
nixpkgs,
mkCommand,
sharedActions,
}: let
l = nixpkgs.lib // builtins;
/*
Use the Containers Blocktype for OCI-images built with nix2container.
Available actions:
- print-image
- publish
- copy-to-registry
- copy-to-podman
- copy-to-docker
*/
containers = name: {
__functor = import ./__functor.nix;
inherit name;
type = "containers";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: [
(sharedActions.build system target)
(mkCommand system {
name = "print-image";
description = "print out the image name & tag";
command = ''
echo
echo "${target.imageName}:${target.imageTag}"
'';
})
(mkCommand system {
name = "publish";
description = "copy the image to its remote registry";
command = let
image = target.imageRefUnsafe or "${target.imageName}:${target.imageTag}";
in ''
# docker://${builtins.unsafeDiscardStringContext image}
${target.copyToRegistry}/bin/copy-to-registry
'';
proviso =
# bash
''
function proviso() {
local -n input=$1
local -n output=$2
local -a images
local delim="$RANDOM"
function get_images () {
command nix show-derivation "$@" \
| command jq -r '.[].env.text' \
| command grep -o 'docker://\S*'
}
drvs="$(command jq -r '.actionDrv | select(. != "null")' <<< "''${input[@]}")"
mapfile -t images < <(get_images $drvs)
command cat << "$delim" > /tmp/check.sh
#!/usr/bin/env bash
if ! command skopeo inspect --insecure-policy "$1" &>/dev/null; then
echo "$1" >> /tmp/no_exist
fi
$delim
chmod +x /tmp/check.sh
rm -f /tmp/no_exist
echo "''${images[@]}" \
| command xargs -n 1 -P 0 /tmp/check.sh
declare -a filtered
for i in "''${!images[@]}"; do
if command grep "''${images[$i]}" /tmp/no_exist &>/dev/null; then
filtered+=("''${input[$i]}")
fi
done
output=$(command jq -cs '. += $p' --argjson p "$output" <<< "''${filtered[@]}")
}
'';
})
(mkCommand system {
name = "copy-to-registry";
description = "copy the image to its remote registry";
command = ''
${target.copyToRegistry}/bin/copy-to-registry
'';
})
(mkCommand system {
name = "copy-to-docker";
description = "copy the image to the local docker registry";
command = ''
${target.copyToDockerDaemon}/bin/copy-to-docker-daemon
'';
})
(mkCommand system {
name = "copy-to-podman";
description = "copy the image to the local podman registry";
command = ''
${target.copyToPodman}/bin/copy-to-podman
'';
})
];
};
in
containers
Nixago
{
nixpkgs,
mkCommand,
}: let
l = nixpkgs.lib // builtins;
/*
Use the Nixago Blocktype for nixago pebbles.
Use Nixago pebbles to ensure files are present
or symlinked into your repository. You may typically
use this for repo dotfiles.
For more information, see: https://github.com/nix-community/nixago.
Available actions:
- populate
- explore
*/
nixago = name: {
inherit name;
type = "nixago";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: [
(mkCommand system {
name = "populate";
description = "populate this nixago file into the repo";
command = ''
${target.install}/bin/nixago_shell_hook
'';
})
(mkCommand system {
name = "explore";
description = "interactively explore the nixago file";
command = ''
${nixpkgs.legacyPackages.${system}.bat}/bin/bat "${target.configFile}"
'';
})
];
};
in
nixago
Arion
{
nixpkgs,
mkCommand,
}: let
l = nixpkgs.lib // builtins;
/*
Use the arion for arionCompose Jobs - https://docs.hercules-ci.com/arion/
Available actions:
- up
- ps
- stop
- rm
- config
- arion
*/
arion = name: {
inherit name;
type = "arion";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: let
cmd = "arion --prebuilt-file ${target.config.out.dockerComposeYaml}";
in [
(mkCommand system {
name = "up";
description = "arion up";
command = ''
${cmd} up "$@"
'';
})
(mkCommand system {
name = "ps";
description = "arion ps";
command = ''
${cmd} ps "$@"
'';
})
(mkCommand system {
name = "stop";
description = "arion stop";
command = ''
${cmd} stop "$@"
'';
})
(mkCommand system {
name = "rm";
description = "arion rm";
command = ''
${cmd} rm "$@"
'';
})
(mkCommand system {
name = "config";
description = "check the docker-compose yaml file";
command = ''
${cmd} config "$@"
'';
})
(mkCommand system {
name = "arion";
description = "pass any command to arion";
command = ''
${cmd} "$@"
'';
})
];
};
in
arion
Nomad Job Manifests
{
nixpkgs,
mkCommand,
}: let
l = nixpkgs.lib // builtins;
/*
Use the `nomadJobManifests` Blocktype for rendering job descriptions
for the Nomad Cluster scheduler. Each named attribute set under the
block contains a valid Nomad job description, written in Nix.
Available actions:
- render
- deploy
- explore
*/
nomadJobManifests = name: {
__functor = import ./__functor.nix;
inherit name;
type = "nomadJobManifests";
actions = {
system,
fragment,
fragmentRelPath,
target,
}: let
fx = "${nixpkgs.legacyPackages.${system}.fx}/bin";
nomad = "${nixpkgs.legacyPackages.${system}.nomad}/bin";
jq = "${nixpkgs.legacyPackages.${system}.jq}/bin";
job = baseNameOf fragmentRelPath;
nixExpr = ''
x: let
job = builtins.mapAttrs (_: v: v // {meta = v.meta or {} // {rev = "\"$(git rev-parse --short HEAD)\"";};}) x.job;
in
builtins.toFile \"${job}.json\" (builtins.unsafeDiscardStringContext (builtins.toJSON {inherit job;}))
'';
layout = ''
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
if test -z "$PRJ_DATA_DIR"; then
echo "PRJ_DATA_DIR is not set. Action aborting."
exit 1
fi
job_path="$PRJ_DATA_DIR/${dirOf fragmentRelPath}/${job}.json"
# use Nomad bin in path if it exists, and only fallback on nixpkgs if it doesn't
PATH="$PATH:${nomad}"
'';
render = ''
if test -z "$PRJ_ROOT"; then
echo "PRJ_ROOT is not set. Action aborting."
exit 1
fi
echo "Rendering to $job_path..."
# use `PRJ_ROOT` to capture dirty state
if ! out="$(nix eval --no-allow-dirty --raw $PRJ_ROOT\#${fragment} --apply "${nixExpr}")"; then
>&2 echo "error: Will not render jobs from a dirty tree, otherwise we cannot keep good track of deployment history."
exit 1
fi
nix build "$out" --out-link "$job_path" 2>/dev/null
if status=$(nomad validate "$job_path"); then
echo "$status for $job_path"
fi
'';
in [
/*
The `render` action will take this Nix job description, convert it to JSON,
inject the git revision, and validate the manifest, after which it can be run or
planned with the Nomad CLI or the `deploy` action.
*/
(mkCommand system {
name = "render";
description = "build the JSON job description";
command =
# bash
''
set -e
${layout}
${render}
'';
})
(mkCommand system {
name = "deploy";
description = "Deploy the job to Nomad";
command =
# bash
''
set -e
${layout}
PATH=$PATH:${jq}
if ! [[ -h "$job_path" ]] \
|| [[ "$(jq -r '.job[].meta.rev' "$job_path")" != "$(git rev-parse --short HEAD)" ]]
then ${render}
fi
if ! plan_results=$(nomad plan -force-color "$job_path"); then
echo "$plan_results"
cmd="$(echo "$plan_results" | grep 'nomad job run -check-index')"
# prompt user interactively except in CI
if ! [[ -v CI ]]; then
read -rp "Deploy this job? (y/N)" deploy
case "$deploy" in
[Yy])
eval "$cmd"
;;
*)
echo "Exiting without deploying"
;;
esac
else
eval "$cmd"
fi
else
echo "Job hasn't changed since last deployment, nothing to deploy"
fi
'';
})
(mkCommand system {
name = "explore";
description = "interactively explore the Job definition";
command =
# bash
''
set -e
${layout}
if ! [[ -h "$job_path" ]]; then
${render}
fi
PATH=$PATH:${fx}
fx "$job_path"
'';
})
];
};
in
nomadJobManifests
Pkgs
{...}: let
/*
Use the Pkgs Blocktype if you need to construct your custom
variant of nixpkgs with overlays.
Targets will be excluded from the CLI / TUI and thus
will not slow them down.
*/
pkgs = name: {
inherit name;
type = "pkgs";
cli = false; # its special power
};
in
pkgs
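A hedged sketch of a Cell Block managed by this Block Type; it assumes, as in the examples above, that inputs.nixpkgs behaves like an imported nixpkgs (so its path and system attributes are available), and the cell layout is illustrative:
# nix/mycell/pkgs.nix - a hypothetical "pkgs" Cell Block
{inputs, cell}: {
  default = import inputs.nixpkgs.path {
    inherit (inputs.nixpkgs) system;
    config = {};
    overlays = [
      (final: prev: {
        # add or override packages here
      })
    ];
  };
}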
The std Cell
… is the only cell in divnix/std and provides only very limited functionality.
- It contains the TUI, in ./cli.
- It contains a devshellProfile in ./devshellProfiles.
- It contains a growing number of second-level library functions in ./lib.
- Packages that are used in std devshells are proxied in ./packages.
That’s it.
The std TUI / CLI
Usage
- Enter a std-ized repository.
- Enter its devshell (which must include //std/devshellProfiles:default).
- Run std.
It will show you around interactively and lead you very quickly to what you're looking for.
It is self-documented via its legend.
std's devshellProfiles
This Cell Block exports only a single default devshellProfile.
Any std-ized repository should include it in its numtide/devshell
in order to provide any visitor with the fully pre-configured std TUI.
It also wires & instantiates a decent ADR tool. Or were you planning to hack away without some minimal conscious effort of decision making and recording? 😅
Usage Example
# ./nix/_automation/devshells.nix
{
inputs,
cell,
}: let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
inherit (inputs.std) std;
in
l.mapAttrs (_: std.lib.mkShell) {
# `default` is a special target in newer nix versions
# see: harvesting below
default = {
name = "My Devshell";
# make `std` available in the numtide/devshell
imports = [ std.devshellProfiles.default ];
};
}
# ./flake.nix
{
inputs.std.url = "github:divnix/std";
outputs = inputs:
inputs.std.growOn {
inherit inputs;
cellsFrom = ./nix;
cellBlocks = [
/* ... */
(inputs.std.blockTypes.devshells "devshells")
];
}
# soil for compatibility ...
{
# ... with `nix develop` - `default` is a special target for `nix develop`
devShells = inputs.std.harvest inputs.self ["_automation" "devshells"];
};
}
The std Nixago Pebbles
Standard comes packaged with some Nixago Pebbles for easy downstream re-use.
Some Pebbles have a special integration for std.
For example, the conform Pebble can understand inputs.cells
and add each Cell as a so-called "scope" to its
Conventional Commit configuration.
If you are rather looking for Nixago Presets (i.e. pebbles that already have an opinionated default), please refer to the Nixago Presets instead.
adrgen
A great tool to manage Architecture Decision Records.
Definition:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
in {
configData = {};
output = "adrgen.config.yml";
format = "yaml";
commands = [{package = cell.packages.adrgen;}];
}
conform
Conform your code to policies, e.g. in a pre-commit hook.
This version is wrapped; it can auto-enhance the Conventional
Commit scopes with your cells as follows:
{ inputs, cell}: let
inherit (inputs.std) std;
in {
default = std.lib.mkShell {
/* ... */
nixago = [
(std.nixago.conform {configData = {inherit (inputs) cells;};})
];
};
}
Definition:
{
inputs,
cell,
}: let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
in {
configData = {};
format = "yaml";
output = ".conform.yaml";
packages = [nixpkgs.conform];
apply = d: {
policies =
[]
++ (l.optional (d ? commit) {
type = "commit";
spec =
d.commit
// l.optionalAttrs (d ? cells) {
conventional =
d.commit.conventional
// {
scopes =
d.commit.conventional.scopes
++ (l.subtractLists l.systems.doubles.all (l.attrNames d.cells));
};
};
})
++ (l.optional (d ? license) {
type = "license";
spec = d.license;
});
};
}
editorconfig
Most editors understand this file and autoconfigure themselves accordingly.
Definition:
{
inputs,
cell,
}: let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
in {
configData = {};
output = ".editorconfig";
engine = request: let
inherit (request) configData output;
name = l.baseNameOf output;
value = {
globalSection = {root = configData.root or true;};
sections = l.removeAttrs configData ["root"];
};
in
nixpkgs.writeText name (l.generators.toINIWithGlobalSection {} value);
packages = [nixpkgs.editorconfig-checker];
}
just
Just is a general-purpose command runner with syntax inspired by make.
Tasks are configured via an attribute set where the name is the name of the task
(i.e. just <task>) and the value is the task definition (see below for an
example). The generated Justfile should be committed to allow non-Nix users to
on-ramp without needing access to Nix.
Task dependencies (i.e. treefmt below) should be included in packages and
will automatically be picked up in the devshell.
{ inputs, cell }:
let
inherit (inputs.std) nixpkgs std;
in
{
default = std.lib.mkShell {
/* ... */
nixago = [
(std.nixago.just {
packages = [ nixpkgs.treefmt ];
configData = {
tasks = {
fmt = {
description = "Formats all changed source files";
content = ''
treefmt $(git diff --name-only --cached)
'';
};
};
};
})
];
};
}
It’s also possible to override the interpreter for a task:
{
# ...
hello = {
description = "Prints hello world";
interpreter = nixpkgs.python3;
content = ''
print("Hello, world!")
'';
};
}
# ...
Definition:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
l = nixpkgs.lib // builtins;
in {
configData = {};
apply = d: let
# Transforms interpreter attribute if present
# nixpkgs.pkgname -> nixpkgs.pkgname + '/bin/<name>'
getExe = x: "${l.getBin x}/bin/${x.meta.mainProgram or (l.getName x)}";
final =
d
// {
tasks =
l.mapAttrs
(n: v:
v // l.optionalAttrs (v ? interpreter) {interpreter = getExe v.interpreter;})
d.tasks;
};
in {
data = final; # CUE expects structure to be wrapped with "data"
};
format = "text";
output = "Justfile";
packages = [nixpkgs.just];
hook = {
mode = "copy";
};
engine = inputs.nixago.engines.cue {
files = [./just.cue];
flags = {
expression = "rendered";
out = "text";
};
postHook = ''
${l.getExe nixpkgs.just} --unstable --fmt -f $out
'';
};
}
lefthook
A fast (parallel execution) and elegant git hook manager.
Definition:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
l = nixpkgs.lib // builtins;
in {
configData = {};
format = "yaml";
output = "lefthook.yml";
packages = [nixpkgs.lefthook];
hook.extra = d: let
# Add an extra hook for adding required stages whenever the file changes
skip_attrs = [
"colors"
"extends"
"skip_output"
"source_dir"
"source_dir_local"
];
stages = l.attrNames (l.removeAttrs d skip_attrs);
stagesStr = l.concatStringsSep " " stages;
in ''
# Install configured hooks
for stage in ${stagesStr}; do
${l.getExe nixpkgs.lefthook} add -f "$stage"
done
'';
}
mdbook
Write clean docs for humans.
This version comes preset with this gem to make any
Solution Architect extra happy: mdbook-kroki-preprocessor
Definition:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
in {
configData = {};
output = "book.toml";
format = "toml";
hook.extra = d: let
sentinel = "nixago-auto-created: mdbook-build-folder";
file = ".gitignore";
str = ''
# ${sentinel}
${d.build.build-dir or "book"}/**
'';
in ''
# Configure gitignore
create() {
echo -n "${str}" > "${file}"
}
append() {
echo -en "\n${str}" >> "${file}"
}
if ! test -f "${file}"; then
create
elif ! grep -qF "${sentinel}" "${file}"; then
append
fi
'';
commands = [{package = nixpkgs.mdbook;}];
}
treefmt
A code formatter to format the entire code tree extremely fast (in parallel and with a smart cache).
Definition:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
in {
configData = {};
output = "treefmt.toml";
format = "toml";
commands = [{package = nixpkgs.treefmt;}];
}
githubsettings
Syncs repository settings defined in .github/settings.yml to GitHub, enabling Pull Requests for repository settings.
In order to use this, you also need to install the GitHub Settings App. Please see the App's Homepage for the configuration schema.
Definition:
{
inputs,
cell,
}: {
configData = {};
output = ".github/settings.yml";
format = "yaml";
hook.mode = "copy"; # let the Github Settings action pick it up outside of devshell
}
Standard Error Message Functions
This Cell Block comprises several error message functions that can be used in different situations.
removeBy
{inputs}: time: body: let
l = inputs.nixpkgs.lib // builtins;
pad = l.concatStringsSep "" (l.genList (_: " ") (20 - (l.stringLength time)));
in
l.warn ''
===============================================
!!! 🔥️ STANDARD DEPRECATION WARNING 🔥️ !!!
-----------------------------------------------
!!! Action required until scheduled removal !!!
!!! Scheduled Removal: ${pad}${time} !!!
-----------------------------------------------
On schedule, deprecated facilities will be
removed from Standard without further warning.
-----------------------------------------------
${body}
===============================================
⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳ ⏳
''
requireInput
{inputs}: input: url: target: let
l = inputs.nixpkgs.lib // builtins;
# other than `divnix/blank`
condition = inputs.${input}.sourceInfo.narHash != "sha256-O8/MWsPBGhhyPoPLHZAuoZiiHo9q6FLlEeIDEXuj6T4=";
trace = l.traceSeqN 1 inputs;
in
assert l.assertMsg condition (trace ''
===============================================
!!! 🚜️ STANDARD INPUT OVERLOADING 🚜️ !!!
-----------------------------------------------
In order to be able to use this target, an
extra input must be overloaded onto Standard
-----------------------------------------------
Target: ${target}
Extra Input: ${input}
Url: ${url}
-----------------------------------------------
To fix this, add this to your './flake.nix':
inputs.std.inputs.${input}.url =
"${url}";
For reference, see current inputs to Standard
in the above trace.
===============================================
🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥 🔥
''); inputs
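A hedged sketch of how this guard might be invoked; the input name, URL, and target label are illustrative assumptions, and requireInput is assumed to already be applied to its {inputs} argument as in the definition above:
let
  # hypothetical: refuse to evaluate unless the 'n2c' input was overloaded,
  # i.e. it is not the divnix/blank placeholder; returns the checked inputs
  checkedInputs = requireInput "n2c" "github:nlewo/nix2container" "//mycell/containers";
in
  checkedInputs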
The Standard Library
This library intends to cover the Software Delivery Life Cycle in the Standard way.
Each Cell Block covers a specific SDLC topic.
The Dev Library
This library covers development aspects of the SDLC.
mkMakes
… provides an interface to makes tasks.
This is an integration for fluidattacks/makes.
A version that has this patch is a prerequisite.
Usage example
{
inputs,
cell,
}: let
inherit (inputs.std.lib) dev;
in {
task = dev.mkMakes ./path/to/make/task/main.nix {};
}
Some refactoring of the tasks may be necessary. Let the error messages be your friend.
mkShell
This is a transparent convenience proxy for numtide/devshell's mkShell function.
It is enriched with a tight integration for std Nixago Pebbles:
{ inputs, cell}: {
default = inputs.std.lib.dev.mkShell {
/* ... */
nixago = [
(cell.nixago.foo {
configData.qux = "xyz";
packages = [ pkgs.additional-package ];
})
cell.nixago.bar
cell.nixago.quz
];
};
}
Note that you can extend any Nixago Pebble at the calling site via a built-in functor, like in the example above.
mkNixago
This is a transparent convenience proxy for nix-community/nixago's lib.${system}.make function.
It is enriched with a forward contract towards the std-enriched mkShell implementation.
In order to define numtide/devshell's commands & packages alongside the Nixago Pebble,
just add the following attrset to the Nixago spec. It will be picked up automatically by mkShell
when that Pebble is used inside its config.nixago option.
{ inputs, cell }: {
foo = inputs.std.lib.dev.mkNixago {
/* ... */
packages = [ /* ... */ ];
commands = [ /* ... */ ];
devshell = { /* ... */ }; # e.g. for startup hooks
};
}
mkArion
This is a transparent convenience proxy for hercules-ci/arion's lib.build function.
However, arion's nixos config option has been removed.
As Standard claims to be the integration layer, it will not delegate integration via a foreign interface to commissioned tools such as arion.
This is a bridge towards and from docker-compose users. Making nixos part of the interface would likely alienate that bridge for those users.
If you need a nixos-based container image, please check out the arion source code to see how it's done.
The Ops Library
This library covers operational aspects of the SDLC.
mkMicrovm
… provides an interface to microvm tasks.
This is an integration for astro/microvm.nix.
Usage example
{
inputs,
cell,
}: let
inherit (inputs.std.lib) ops;
in {
# microvm <module>
myhost = ops.mkMicrovm ({ pkgs, lib, ... }: { networking.hostName = "microvms-host";});
}
mkOperable
… is a function interface into the second layer of packaging of the Standard SDLC Packaging pattern.
It’s purpose is to provide an easy way to enrich a “package” into an “operable”.
The function signature is as follows:
/*
Makes a package operable by configuring the necessary runtime environment.
Args:
package: The package to wrap.
runtimeScript: A bash script to run at runtime.
runtimeEnv: An attribute set of environment variables to set at runtime.
runtimeInputs: A list of packages to add to the runtime environment.
livenessProbe: An optional derivation to run to check if the program is alive.
readinessProbe: An optional derivation to run to check if the program is ready.
Returns:
An operable for the given package.
*/
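The following is a hedged sketch (not taken from the source) that exercises the arguments documented above; the hello-based runtime script and the cell layout are assumptions:
{inputs, cell}: let
  inherit (inputs.std.lib) ops;
  inherit (inputs) nixpkgs;
in {
  # a hypothetical operable; argument names follow the docstring above
  hello = ops.mkOperable {
    package = nixpkgs.hello;
    runtimeEnv.GREETING_TARGET = "world";
    runtimeInputs = [nixpkgs.coreutils];
    runtimeScript = ''
      exec hello --greeting "Hello, $GREETING_TARGET!"
    '';
  };
}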
mkOCI
… is a function to generate an OCI Image via nix2container
.
The function signature is as follows:
/*
Creates an OCI container image
Args:
name: The name of the image.
entrypoint: The entrypoint of the image. Must be a derivation.
tag: Optional tag of the image (defaults to output hash)
setup: A list of setup tasks to run to configure the container.
uid: The user ID to run the container as.
gid: The group ID to run the container as.
perms: A list of permissions to set for the container.
labels: An attribute set of labels to set for the container. The keys are
automatically prefixed with "org.opencontainers.image".
options: Additional options to pass to nix2container.buildImage.
Returns:
An OCI container image (created with nix2container).
*/
mkStandardOCI
… is a function interface into the third layer of packaging of the Standard SDLC Packaging pattern.
It produces a Standard OCI Image from an “operable”.
The function signature is as follows:
/*
Creates an OCI container image using the given operable.
Args:
name: The name of the image.
operable: The operable to wrap in the image.
tag: Optional tag of the image (defaults to output hash)
setup: A list of setup tasks to run to configure the container.
uid: The user ID to run the container as.
gid: The group ID to run the container as.
perms: A list of permissions to set for the container.
labels: An attribute set of labels to set for the container. The keys are
automatically prefixed with "org.opencontainers.image".
debug: Whether to include debug tools in the container (coreutils).
options: Additional options to pass to nix2container.
Returns:
An OCI container image (created with nix2container).
*/
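And a hedged sketch of turning such an operable into an image, again using only the documented arguments; the registry name and the cell.operables reference are assumptions:
{inputs, cell}: let
  inherit (inputs.std.lib) ops;
in {
  # a hypothetical containers target; 'cell.operables.hello' would be an
  # operable created with mkOperable as sketched above
  hello = ops.mkStandardOCI {
    name = "registry.example.com/hello";
    operable = cell.operables.hello;
    tag = "latest";
    debug = false;
  };
}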
The Standard Image
Standard images are minimal and hardened. They only contain required dependencies.
Contracts
The following contracts can be consumed:
/bin/entrypoint # always present
/bin/runtime # always present, drops into the runtime environment
/bin/live # if livenessProbe was set
/bin/ready # if readinessProbe was set
That’s it. There is nothing more to see.
All other dependencies are contained in /nix/store/...
The Debug Image
Debug Images wrap the standard images and provide additional debugging packages.
Hence, they are neither minimal nor hardened, because of the debugging packages’ added surface.
Contracts
The following contracts can be consumed:
/bin/entrypoint # always present
/bin/runtime # always present, drops into the runtime environment
/bin/debug # always present, drops into the debugging environment
/bin/live # if livenessProbe was set
/bin/ready # if readinessProbe was set
How to extend?
A Standard or Debug Image doesn’t have a package manager available in the environment.
Hence, to extend the image you have two options:
Nix-based extension
rec {
upstream = n2c.pullImage {
imageName = "docker.io/my-upstream-image";
imageDigest = "sha256:fffff.....";
sha256 = "sha256-ffffff...";
};
modified = n2c.buildImage {
name = "docker.io/my-modified-image";
fromImage = upstream;
contents = [nixpkgs.bashInteractive];
};
}
Dockerfile-based extension
FROM alpine AS builder
RUN apk add --no-cache curl
FROM docker.io/my-upstream-image
COPY --from=builder /... /
Please refer to the official dockerfile documentation for more details.
Standard Presets
Standard Presets bring out-of-the-box experiences. However, since they are clearly marked as “presets”, you can ignore them as you please.
Nix Templates
These are opinionated template projects designed to get you kick-started.
You can make use of them through the Nix CLI, via:
$ cd my-new-project
$ nix flake init -t github:divnix/std#<template-name>
Please consult the template section in the docs for an overview.
Nixago Presets
These are out-of-the-box configurations of Nixago Pebbles.
You can amend them the same way they themselves amend the base Nixago Pebbles:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
inherit (inputs.cells) std;
l = nixpkgs.lib // builtins;
in {
adrgen = std.nixago.adrgen {
configData = import ./nixago/adrgen.nix;
};
editorconfig = std.nixago.editorconfig {
configData = import ./nixago/editorconfig.nix;
hook.mode = "copy"; # already useful before entering the devshell
};
conform = std.nixago.conform {
configData = import ./nixago/conform.nix;
};
lefthook = std.nixago.lefthook {
configData = import ./nixago/lefthook.nix;
};
mdbook = std.nixago.mdbook {
configData = import ./nixago/mdbook.nix;
hook.mode = "copy"; # let CI pick it up outside of devshell
packages = [std.packages.mdbook-kroki-preprocessor];
};
treefmt = std.nixago.treefmt {
configData = import ./nixago/treefmt.nix;
packages = [
nixpkgs.alejandra
nixpkgs.nodePackages.prettier
nixpkgs.nodePackages.prettier-plugin-toml
nixpkgs.shfmt
];
devshell.startup.prettier-plugin-toml = l.stringsWithDeps.noDepEntry ''
export NODE_PATH=${nixpkgs.nodePackages.prettier-plugin-toml}/lib/node_modules:$NODE_PATH
'';
};
githubsettings = std.nixago.githubsettings {
configData = import ./nixago/githubsettings.nix;
};
}
just doesn’t have a preset: for a task runner, it wouldn’t make a lot of sense.
If you have a good idea how to make these presets more useful, please consider to submit a PR.
adrgen
Nixago Presets
{
default_meta = [];
default_status = "proposed";
directory = "docs/explain/architecture-decision-records";
id_digit_number = 4;
supported_statuses = [
"proposed"
"accepted"
"rejected"
"superseded"
"amended"
"deprecated"
];
template_file = "docs/explain/architecture-decision-records/template.md";
}
If you have a good idea how to make these presets more useful, please consider to submit a PR.
conform
Nixago Presets
{
commit = {
header = {length = 89;};
conventional = {
types = [
"build"
"chore"
"ci"
"docs"
"feat"
"fix"
"perf"
"refactor"
"style"
"test"
];
scopes = [];
};
};
}
If you have a good idea how to make these presets more useful, please consider to submit a PR.
editorconfig
Nixago Presets
{
root = true;
"*" = {
end_of_line = "lf";
insert_final_newline = true;
trim_trailing_whitespace = true;
charset = "utf-8";
indent_style = "space";
indent_size = 2;
};
"*.{diff,patch}" = {
end_of_line = "unset";
insert_final_newline = "unset";
trim_trailing_whitespace = "unset";
indent_size = "unset";
};
"*.md" = {
max_line_length = "off";
trim_trailing_whitespace = false;
};
"{LICENSES/**,LICENSE}" = {
end_of_line = "unset";
insert_final_newline = "unset";
trim_trailing_whitespace = "unset";
charset = "unset";
indent_style = "unset";
indent_size = "unset";
};
}
If you have a good idea how to make these presets more useful, please consider to submit a PR.
lefthook
Nixago Presets
{
commit-msg = {
commands = {
conform = {
# allow WIP, fixup!/squash! commits locally
run = ''
[[ "$(head -n 1 {1})" =~ ^WIP(:.*)?$|^wip(:.*)?$|fixup\!.*|squash\!.* ]] ||
conform enforce --commit-msg-file {1}'';
skip = ["merge" "rebase"];
};
};
};
pre-commit = {
commands = {
treefmt = {
run = "treefmt --fail-on-change {staged_files}";
skip = ["merge" "rebase"];
};
};
};
}
If you have a good idea how to make these presets more useful, please consider to submit a PR.
mdbook
Nixago Presets
{
book = {
language = "en";
multilingual = false;
src = "docs";
title = "Documentation";
};
build = {
build-dir = "docs/book";
};
preprocessor = {
kroki-preprocessor = {
command = "mdbook-kroki-preprocessor";
};
};
}
If you have a good idea how to make these presets more useful, please consider to submit a PR.
treefmt
Nixago Presets
{
formatter = {
nix = {
command = "alejandra";
includes = ["*.nix"];
};
prettier = {
command = "prettier";
options = ["--plugin" "prettier-plugin-toml" "--write"];
includes = [
"*.css"
"*.html"
"*.js"
"*.json"
"*.jsx"
"*.md"
"*.mdx"
"*.scss"
"*.ts"
"*.yaml"
"*.toml"
];
};
shell = {
command = "shfmt";
options = ["-i" "2" "-s" "-w"];
includes = ["*.sh"];
};
};
}
If you have a good idea how to make these presets more useful, please consider to submit a PR.
githubsettings
Nixago Presets
This preset defines some default GitHub labels to help you organize your project better.
let
colors = {
black = "#000000";
blue = "#1565C0";
lightBlue = "#64B5F6";
green = "#4CAF50";
grey = "#A6A6A6";
lightGreen = "#81C784";
gold = "#FDD835";
orange = "#FB8C00";
purple = "#AB47BC";
red = "#F44336";
yellow = "#FFEE58";
};
labels = {
statuses = {
abandoned = {
name = ":running: Status: Abandoned";
description = "This issue has been abandoned";
color = colors.black;
};
accepted = {
name = ":ok: Status: Accepted";
description = "This issue has been accepted";
color = colors.green;
};
blocked = {
name = ":x: Status: Blocked";
description = "This issue is in a blocking state";
color = colors.red;
};
inProgress = {
name = ":construction: Status: In Progress";
description = "This issue is actively being worked on";
color = colors.grey;
};
onHold = {
name = ":golf: Status: On Hold";
description = "This issue is not currently being worked on";
color = colors.red;
};
reviewNeeded = {
name = ":eyes: Status: Review Needed";
description = "This issue is pending a review";
color = colors.gold;
};
};
types = {
bug = {
name = ":bug: Type: Bug";
description = "This issue targets a bug";
color = colors.red;
};
story = {
name = ":scroll: Type: Story";
description = "This issue targets a new feature through a story";
color = colors.lightBlue;
};
maintenance = {
name = ":wrench: Type: Maintenance";
description = "This issue targets general maintenance";
color = colors.orange;
};
question = {
name = ":grey_question: Type: Question";
description = "This issue contains a question";
color = colors.purple;
};
security = {
name = ":cop: Type: Security";
description = "This issue targets a security vulnerability";
color = colors.red;
};
};
priorities = {
critical = {
name = ":boom: Priority: Critical";
description = "This issue is prioritized as critical";
color = colors.red;
};
high = {
name = ":fire: Priority: High";
description = "This issue is prioritized as high";
color = colors.orange;
};
medium = {
name = ":star2: Priority: Medium";
description = "This issue is prioritized as medium";
color = colors.yellow;
};
low = {
name = ":low_brightness: Priority: Low";
description = "This issue is prioritized as low";
color = colors.green;
};
};
effort = {
"1" = {
name = ":muscle: Effort: 1";
description = "This issue is of low complexity or very well understood";
color = colors.green;
};
"2" = {
name = ":muscle: Effort: 2";
description = "This issue is of medium complexity or only partly well understood";
color = colors.yellow;
};
"5" = {
name = ":muscle: Effort: 5";
description = "This issue is of high complexity or just not yet well understood";
color = colors.red;
};
};
};
l = builtins;
in {
labels =
[]
++ (l.attrValues labels.statuses)
++ (l.attrValues labels.types)
++ (l.attrValues labels.priorities)
++ (l.attrValues labels.effort);
}
In order to use this preset, you also need to install the GitHub Settings App.
If you have a good idea how to make these presets more useful, please consider to submit a PR.
Devshells
- The default devshell implements the development environment for the std TUI/CLI.
- Furthermore, it implements a pre-commit hook to keep the source code formatted.
- It makes use of std.lib.mkShell, which is a convenience proxy for numtide/devshell.
Glossary
Cell
: A Cell is the folder name of the first level under ${cellsFrom}. They represent a coherent semantic collection of functionality.
Cell Block
: A Cell Block is the specific named type of a Standard (and hence: Flake) output.
Block Type
: A Block Type is the unnamed generic type of a Cell Block and may or may not implement Block Type Actions.
Target
: A Target is the actual output of a Cell Block. If there is only one intended output, it is called default by convention.
Action
: An Action is a runnable procedure implemented on the generic Block Type. These are abstract procedures that are valuable in any concrete Cell Block of such a Block Type.
The Registry
: The Registry, in the context of Standard and if it doesn’t refer to a well-known external concept, means the .#__std flake output. This Registry holds different Registers that serve different discovery purposes. For example, the CLI can discover relevant metadata or a CI can discover desired pipeline targets.