Standard
Ship today.
Standard is a nifty DevOps framework that enables an efficient Software Development Life Cycle (SDLC) with the power of Nix via Flakes.
It organizes and disciplines your Nix and thereby speeds you up. It also comes with great horizontal integrations of high quality vertical DevOps tooling crafted by the Nix Ecosystem.
Stack
Integrations
The Standard Story
Once your nix code has evolved into a giant ball of spaghetti, and nobody except a few select members of your tribe can still read it with ease, and once it has grown into an impertinence to the rest of your colleagues, then std brings the overdue order to your piece of art through a well-defined folder structure and disciplining generic interfaces.
With std, you'll learn how to organize your nix flake outputs ("Targets") into Cells and Cell Blocks — folded into a useful CLI & TUI to also make the lives of your colleagues easier.
Through more intuition and less documentation, your team and community will finally find a canonical answer to the everlasting question: What can I do with this repository?
The Standard NixOS Story (in case you wondered)
Once you've gotten fed up with divnix/digga or a disorganized personal configuration, please head straight over to divnix/hive and join the chat there. It's a work in progress.
But hey! That means we can progress together!
Getting Started
# flake.nix
{
description = "Description for the project";
inputs = {
std.url = "github:divnix/std";
nixpkgs.follows = "std/nixpkgs";
};
outputs = { std, self, ...} @ inputs: std.growOn {
inherit inputs;
# 1. Each folder inside `cellsFrom` becomes a "Cell"
# Run for example: 'mkdir nix/mycell'
# 2. Each <block>.nix or <block>/default.nix within it becomes a "Cell Block"
# Run for example: '$EDITOR nix/mycell/packages.nix' - see example content below
cellsFrom = ./nix;
# 3. Only blocks with these names [here: "packages" & "shells"] are picked up by Standard
# It's a bit like the output type system of your flake project (hint: CLI & TUI!!)
cellBlocks = with std.blockTypes; [
(installables "packages" {ci.build = true;})
(devshells "shells" {ci.build = true;})
];
}
# 4. Run 'nix run github:divnix/std'
# 'growOn' ... Soil:
# - here, compat for the Nix CLI
# - but can use anything that produces flake outputs (e.g. flake-parts or flake-utils)
# 5. Run: nix run .
{
devShells = std.harvest self ["mycell" "shells"];
packages = std.harvest self ["mycell" "packages"];
};
}
# nix/mycell/packages.nix
{inputs, cell}: {
inherit (inputs.nixpkgs) hello;
default = cell.packages.hello;
}
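Once this is in place, the new Target is reachable through the std CLI; a hypothetical invocation, mirroring the pattern used throughout this documentation, would look like this:
# fetch `std`
$ nix shell github:divnix/std
$ std //mycell/packages/hello:build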
This Repository
This repository combines the above-mentioned stack components into the ready-to-use Standard framework. It adds a curated collection of Block Types for DevOps use cases. It further dogfoods itself and implements utilities in its own Cells.
Dogfooding
Only renders in the Documentation.
inputs: let
inherit (inputs) incl std;
inherit (inputs.paisano) pick harvest;
in
std.growOn {
inherit inputs;
cellsFrom = incl ./src ["local" "tests"];
nixpkgsConfig = {allowUnfree = true;};
cellBlocks = with std.blockTypes; [
## For local use in the Standard repository
# local
(devshells "shells" {ci.build = true;})
(nixago "configs")
(containers "containers")
(namaka "checks" {ci.check = true;})
];
}
{
devShells = harvest inputs.self ["local" "shells"];
checks = harvest inputs.self ["tests" "checks" "snapshots" "check"];
}
(std.grow {
inherit inputs;
cellsFrom = incl ./src ["std" "lib" "data"];
cellBlocks = with std.blockTypes; [
## For downstream use
# std
(runnables "cli" {ci.build = true;})
(functions "devshellProfiles")
(functions "errors")
(data "templates")
# lib
(functions "dev")
(functions "ops")
(anything "cfg")
(data "configs")
];
})
{
packages = harvest inputs.self [["std" "cli"] ["std" "packages"]];
templates = pick inputs.self ["std" "templates"];
}
That's it. std.grow is a "smart" importer of your nix code and is designed to keep boilerplate at bay. In the so-called "Soil" compatibility layer, you can do whatever your heart desires. For example, put flake-utils or flake-parts patterns here. Or, as in the above example, just make your stuff play nicely with the Nix CLI.
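For illustration, here is a hedged sketch of one more soil layer, assuming you add numtide/flake-utils as an additional flake input named flake-utils (the formatter output is purely an example):
# an extra soil layer; any expression producing flake outputs works
(inputs.flake-utils.lib.eachDefaultSystem (system: {
  # expose a repository-wide formatter for `nix fmt`
  formatter = inputs.nixpkgs.legacyPackages.${system}.alejandra;
}))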
TIP:
- Clone this repo: git clone https://github.com/divnix/std.git
- Install direnv & inside the repo, run: direnv allow (the first time takes a little longer)
- Run the TUI by entering std (the first time takes a little longer)
What can I do with this repository?
Documentation
The Documentation is here.
And here is the Book, a very good walk-through. Start here!
Video Series
Examples in the Wild
This GitHub search query holds a pretty good answer.
Why?
Contributions
Please enter the contribution environment:
direnv allow || nix develop -c "$SHELL"
Licenses
What licenses are used? → ./.reuse/dep5.
And the usual copies? → ./LICENSES.
Standard Design and Architecture
At the time of writing, almost a year of exploratory and freestyle project history has passed. Fortunately, it is not necessary for further understanding, so I'll spare you that. This document, though, lays out the design, architecture and direction of Standard.
If the topics discussed herein are dear to you, please take it as an invitation to get involved.
This design document shall be stable, and amendments go through a proper process of consideration.
Overview
Standard is a collection of functionality and best practices ("framework") to bootstrap and sustain the automatable sections of the Software Development Lifecycle (SDLC) efficiently with the power of Nix and Flakes. In particular, Standard is a Horizontal* Integration Framework which integrates vertical* tooling.
We occasionally adapt concepts from non-technical contexts. This is one instance.
Vertical Tooling does one thing and does it well in a narrow scope (i.e. "vertical").
Horizontal Tooling stitches vertical tooling together into a polished whole.
What is being integrated are the end-to-end automatable sections of the SDLC. For these we curate a collection of functionality, tools and best practices.
An SDLC's efficiency is characterized by two things.
Firstly, by adequate lead time, which is the amount of time it takes to set up an initial version of the software delivery pipeline. It needs to be adequate rather than just fast, as it takes place in the context of a team. Rather than for raw speed, teams need to optimize for success. For example, a process needs to be documented & explained, and your team needs to be trained on it. Standard encourages incremental adoption in order to leave enough space for these paramount activities. If you're in a hurry and your team is on board, though, you can still jumpstart its adoption.
Secondly, efficient SDLCs are characterized by short cycle times, which is the amount of time it takes for a designed feature to be shipped to production. Along this journey, we encounter our scope (more on it below):
- aspects of the development environment;
- the packaging pipeline that produces artifacts;
- and continuous processes integrating the application lifecycle.
Hence, the goal of Standard is to:
- Enable easy and incremental adoption
- Optimize the critical path that reduces your SDLC's cycle time.
Additionally, unlike similar projects, we harness the power of Nix & Flakes to ensure reproducibility.
Goals
- Complete: Standard should cover the important use cases for setting up and running the automatable sections of the SDLC.
- Optimized: Standard should optimize both for the needs of the individual developers and the market success of the product.
- Integrated: Standard should provide the user with a satisfying integration experience across a well-curated assortment of tools and functionality.
- Extensible: Standard should account for the need to seamlessly modify, swap or extend its functionality when necessary.
Please refer to the sales pitch if you need more context.
Ideals
While we aim to improve the SDLC by applying Nix and its ecosystem's ingenuity to the problem, we also want to build bridges. In order to bring the powers of store-based reproducible packaging to colleagues and friends, we need to maneuver around the ecosystem's pitfalls:
- Use nix only where it is best suited — a Nix maximalist approach may be an innate condition to some of us, but to build bridges we deeply recognize and value other perspectives and don't dismiss them as ignorance.
- Disrupt where disruption is necessary — the Nix ecosystem has a fairly rigid set of principles and norms that we don't think always apply in every use case.
- Look left, right, above and beyond — our end-to-end perspective commands us to actively seek and reach out to other projects and ecosystems to compose our value chain; there's no place for the "not invented here"-syndrome.
Scope
These are big goals and ideals. In the interest of practical advancements, we'll narrow down the scope in this section.
We can subdivide (not break up!) our process into roughly three regions with different shapes and characteristics:
- Development Environment roughly covers code-to-commit.
- Packaging Pipeline roughly covers commit-to-distribution.
- Deployment and Beyond roughly covers distribution-to-next-rollout.
We delegate:
- The Development Environment to a trusted project in the broader Nix Community employing community outreach to promote our cause and ensure it is at least not accidentally sabotaged.
- The Deployment and Beyond by cultivating outreach and dovetailing with initiatives of, among others, the Cloud Native ecosystem.
And we focus on:
- The Packaging Pipeline
- Interfaces and Integration with the other two
Architecture
With clarity about Standard's general scope and direction, let's proceed to an overview of its architecture.
Locating Standard in the SDLC
Where is Standard located in the big picture?
This graphic locates Standard across the SDLC & Application Lifecycle Management (ALM).
But not only that. It also explains how automation is itself implemented as code, just like the application itself. Therefore, we make a distinction between:
- first order application code (L1); and
- above that, higher order supporting code as exemplified by L2 and L3.
Glossary:
L2 & L3 have no clearly defined meaning. They represent that we may observe multiple layers of higher order code when automating. Examples could be bash scripts, configuration data, platform utility code and more.
Standard's Components and their Value Contribution
What is Standard made of? And how do its components contribute value?
On the left side of the graphic, we show how Standard, like an onion, is built in layers:
Center to Standard is divnix/paisano
.
This flake (i.e. "Nix library") implements two main abstractions: Block Types and Cells.
Block Types are not unlike Rust's traits or Golang's interfaces. They are abstract definitions of artifact classes. Those abstract classes implement shared functionality.
A few examples of artifact classes in our scope are: packages, containers, scripts and manifests, among others. Examples of shared functionality are (a shared implementation of) push on containers and (a shared implementation of) build on packages.
Cells, in turn, organize your code into related units of functionality. Hence, Cells are a code organization principle.
On top of Paisano's abstractions, Standard implements within its scope:
- a collection of Block Types; and
- a collection of library functionality organized in Cells.
On the right side of the graphic, we sketch an idea of how these components are put into service for the SDLC.
Paisano (Code Organization)
We already learned about Paisano's two main abstractions: Cells & Block Types.
Cells enable and encourage the user to cleanly organize their code into related units of functionality. The concrete semantics of the code layout are entirely of her choosing. For example, she could separate application tiers like frontend and backend into their own Cells. Or she could reflect the microservices architecture in the Cells.
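A hypothetical layout for such a tier-based split could look like this (cell and block names are made up; every folder below cellsFrom becomes a Cell, and every <block>.nix inside it a Cell Block):
nix/
├── frontend/
│   ├── packages.nix
│   └── containers.nix
└── backend/
    ├── packages.nix
    └── containers.nix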
Paisano has a first-class concept of Cells. By simply placing a folder in the repository, Paisano will pick it up. In that regard, Paisano is an automated importer that spares the user the need to set up and maintain boilerplate plumbing code.
Within a Cell, the user groups artifacts within Blocks of an appropriate Block Type. When configuring Standard, she names her Blocks using Standard's Block Types so that Paisano's importer can pick them up, too. By doing that, she also declares the repository's artifact type system to humans and machines.
Machines can make great use of that to interact with the artifact type system in multiple ways. Paisano exports structured json-serializable data about a repository's typed artifacts in its so-called "Paisano Registry". A CLI or TUI, as is bundled with Standard, or even a web user interface can consume, represent and act upon that data.
And so can CI.
In fact, this is an innovation in the SDLC space:
We can devise an implementation of a CI which, by querying Paisano's Registry, autonomously discovers all work that needs to be done.
In order to demonstrate the value of this proposition, we made a reference implementation for GitHub Actions over at divnix/std-action
.
To our knowledge, this is the first and only "zero config" CI implementation based on the principles of artifact typing and code organization.
By using these principles rather than a rigid opinionated structure, it also remains highly flexible and adapts well to the user's preferences & needs.
In summary, all these organization and typing principles enable:
- easy refactoring of your repository's devops namespace;
- intuitive grouping of functionality that encourages well-defined internal boundaries,
- allowing for keeping your automation code clean and maintainable;
- making use of Block Types and the shared library to implement the DRY principle;
- reasoning about the content of your repo through structured data,
- and, thereby, facilitate interesting user interfaces, such as a CLI, TUI or even a UI,
- as well as services like a (close to) zero config, self-updating CI;
- similar organizational principles help to lower the cost of context switching between different projects.
Standard's Block Types (DevOps Type System)
As mentioned above, Standard exploits the Block Type abstraction to provide artifact types for the SDLC. Within the semantics of each Block Type, we implement shared functionality. This is designed to offer the user an optimized, audited implementation. It alleviates the burden of devising "yet another" local implementation of otherwise well-understood, generic functionality, such as building a package or pushing a container image.
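On the command line, such shared functionality surfaces as Block Type Actions. A hypothetical invocation (the Cell, Target and Action names here are illustrative and may differ from the concrete Block Type implementations):
$ std //mycell/packages/hello:build
$ std //mycell/containers/webapp:push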
Standard's Cells (Function Library)
Alongside the Packaging Pipeline, Standard provides a curated assortment of library functions and integrations that users can adopt. While optional, an audited and community-maintained function library and its corresponding documentation fulfill the promise of productivity, shared mental models and ease of adoption.
Modularity & Virality Model
We aim to provide a public registry in which we index and aggregate additional Block Types and Cells from the Standard user community that are not maintained in-tree. To boost its value, aggregate documentation will be part of that registry. We need to decide on how to deeply integrate documentation concerns, such as structured docstrings & adjacent readmes, into the framework.
Value Matrix
This section will explain how Standard intends to create value for different stakeholders. It is essential to have an idea of who they are, so let's introduce:
The Software Sponsor Makes resources available in return for the expectation of future benefits.
The Provider of Automation Sets up and maintains the automation along the SDLC. A helpful analogy would be the person who sets up and maintains the conveyor belt which moves features to production.
The Consumer of Automation Consumes and co-maintains the automation along the SDLC. A helpful analogy would be that this person not only uses and configures our conveyor belt, but is also capable of occasionally maintaining it.
It is essential to have an understanding of what they value, so let's try to get an overview. We'll make use of a high level value matrix with simple sentiment scores:
- 😍 → "absolutely love it!!!"
- 😄 → "feels pretty good."
- 😐 → "whatever?!?"
| | Software Sponsor [Principal] | Provider of SDLC Automation [Agent] | Consumer of SDLC Automation [Agent] |
|---|---|---|---|
| Productivity | 😍 | 😍 | 😄 |
| Code Organization | 😄 | 😍 | 😄 |
| Mental Model & Learning | 😄 | 😍 | 😄 |
| Batteries Included | 😐 | 😍 | 😐 |
| Community and Ecosystem | 😐 | 😍 | 😄 |
| Reproducibility & Software Supply Chain Security | 😍 | 😍 | 😐 |
| Modularity & Incremental Adoption | 😄 | 😄 | 😍 |
| Developer Experience & Onboarding Story | 😄 | 😄 | 😍 |
So, this is for you and your team, if you:
- Care about reproducibility for more reliability throughout your software development
- Value clean code for keeping technical debt in check and increasing long-term maintainability
- Have a deadline to meet with the help of the included best practices and batteries
- Want to serve an optimized UX to your colleagues via a repo CLI / TUI and (close to) zero-config CI
Selling Points
The main selling points of Standard are:
- Efficiency: Standard automates the software delivery lifecycle, making the process more efficient and streamlined.
- Reproducibility: Standard's emphasis on reproducibility ensures that every stage of the SDLC can be easily replicated, leading to a more consistent and reliable software development process.
- Speed: Standard optimizes the critical path of the SDLC journey to achieve superior cycle times, which means that your software can be shipped to production faster.
- Flexibility: Standard is built to be flexible and adaptable, which allows it to be used in a variety of different contexts and industries.
- Cost-effective: Automating the software delivery lifecycle with Standard saves time and resources, making it more cost-effective.
- Integration: Standard is a horizontal integration framework that integrates vertical tooling, making it easier to stitch together different tools and processes to create a polished whole.
- Community Outreach: Standard is a part of the Nix ecosystem and is committed to community outreach to ensure that its optimization targets are met and that other perspectives are not dismissed.
These points show how Standard can help adopters improve their software delivery process, save time and money, and improve the quality of their software.
Comparing Standard to X
Where appropriate, we compare with divnix/paisano
, instead.
Comparison with tools in the Nix ecosystem
flake-utils
numtide/flake-utils
is a small & lightweight utility with a focus on generating flake file outputs in accordance with the packaging and NixOS use cases built into the Nix CLI tooling.
Paisano, in turn, is an importer with a focus on code organization.
Like Flake Utils, it, too, was designed to be used inside the flake.nix
file.
However, flake.nix
is a repository's prime estate.
And so Paisano was optimized for keeping that estate as clean as possible while, at the same time, being a useful table of contents even to a relative nix layman.
While you can use it to match the schema that Nix CLI expects, it also enables more flexibility as it is not specially optimized for any particular use case.
flake-parts
hercules-ci/flake-parts
is a component aggregator with a focus on the flake schema built into the Nix CLI tooling; it makes use of the NixOS module system for composability and aggregation.
Paisano, in turn, is an importer with a focus on code organization; it still plugs well into a flake.nix
file, but also preserves its index function by keeping it clean.
While you can use it to match the schema that Nix CLI expects, it also enables more flexibility as it is not specially optimized for any particular use case.
To a lesser extent, Paisano is also a component aggregator for your flake outputs. However, in that role, it gives you back the freedom to use the output schema that best fits your problem domain.
The core tenet of Flake Parts is domain specific interfaces for each use case. Flake Parts implements and aggregates these interfaces based on the NixOS module system.
Paisano, in turn, focuses on code organization along high level code boundaries connected by generic interfaces. The core tenet of Paisano remains Nix's original functional style.
Convergence towards the Flakes output schema is provided via the harvester family of utility functions (winnow, harvest & pick).
Depending on the domain schema, it can be a lossy convergence, though, due to the lesser expressivity of the flake output schema.
Example usage of harvester functions
{
inputs = { /* snip */ };
outputs = { std, self, ...}:
std.growOn {
/* snip */
}
{
devShells = std.harvest self ["automation" "shells"];
packages = std.harvest self [["std" "cli"] ["std" "packages"]];
templates = std.pick self ["presets" "templates"];
};
}
Devshell
Standard wraps numtide/devshell
to improve the developer experience in the early parts of the SDLC via reproducible development shells.
Comparison with other tools & frameworks
My language build tool
Nix wraps language-level tooling into a sandbox and a cross-language build graph to ensure reproducibility. Most languages are already covered.
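For instance, here is a minimal, hedged sketch of packaging a Go service with the language-aware builder from nixpkgs inside a Cell Block (names are placeholders; a project with Go dependencies needs a real vendorHash):
{inputs, cell}: {
  my-service = inputs.nixpkgs.buildGoModule {
    pname = "my-service";
    version = "0.1.0";
    src = ./.;
    # this sketch assumes no vendored dependencies
    vendorHash = null;
  };
}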
Bazel
Bazel is similar to Nix in that it creates cross-language build graphs. However, it does not guarantee reproducibility. Currently it has more advanced build caching strategies: a gap that the Nix community is very eager to close soon.
My CI/CD
Any CI can leverage Paisano's Registry to discover work. Implementations can either be native to the CI or provided via CI-specific wrappers, a strategy chosen, for example, by our reference implementation for GitHub Actions.
Roadmap
The freestyle history of the project was productive during the alpha stage, but as the code stabilizes, so too must our processes.
This roadmap gives an overview of the short and mid term direction that the project aims to take.
Deliverable Categories
We've identified a couple of deliverable categories in line with the architectural overview.
These help us to better understand the work spectrum associated with the project.
Process Categories
To run automation we have to set it up, first. We should keep that in mind when working on the backlog and therefore classify:
- Setup
- Automation
For setup, besides function libraries, a variety of supporting material is crucial, such as:
- Documentation & Instructions
- Patterns & Shared Mental Models
- Quick Start Templates
- Onboarding & Learning Content
Process Regions
Per our architectural overview, we distinguish these process regions:
- Development Environment
- Build Pipeline
- Deployment and Beyond (Application Lifecycle Management)
Deliverable Types
- Docs
- CLI commands or TUI helpers
- Integrations
- Library functions
- Block Types
- Stable interfaces
- Community outreach
Milestone v1
With the above in mind, the issue backlog will be regularly groomed and prioritized. This is an aid for the core contributors, but it may also provide the necessary orientation to get new contributors set up.
Welcome!
A walk in the park
This is an excellent tutorial series by Joshua Gilman in the form of The Standard Book.
It is ideal for people with prior Nix and Nix Flakes experience.
The chapters are written in a way that feels like a walk in the park, hence the nickname.
They are also often used to dogfood some new std functionality and document it alongside in a palatable (non-terse) writing style.
Enjoy!
Hello World
Standard features a special project structure that brings some awesome innovation to this often overlooked (but important) part of your project.
With the default Cell Blocks, an apps.nix file tells Standard that we are creating an Application.
flake.nix is in charge of explicitly defining the inputs of your project.
Btw, you can copy* the following files from here.
* don't just clone the std repo: flakes in subfolders don't work that way.
/tmp/play-with-std/hello-world/flake.nix
{
inputs.std.url = "github:divnix/std";
inputs.nixpkgs.url = "nixpkgs";
outputs = {std, ...} @ inputs:
std.grow {
inherit inputs;
cellsFrom = ./cells;
};
}
/tmp/play-with-std/hello-world/cells/hello/apps.nix
{
inputs,
cell,
}: {
default = inputs.nixpkgs.stdenv.mkDerivation rec {
pname = "hello";
version = "2.10";
src = inputs.nixpkgs.fetchurl {
url = "mirror://gnu/hello/${pname}-${version}.tar.gz";
sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
};
};
}
$ cd /tmp/play-with-std/hello-world/
$ git init && git add . && git commit -m "nix flakes can only see files under version control"
# fetch `std`
$ nix shell github:divnix/std
$ std //hello/apps/default:run
Hello, world!
You see? From nothing to running your first application in just a few seconds ✨
Assumptions
This example consumes the following defaults or builtins:
Default cellBlocks:
{
cellBlocks ? [
(blockTypes.functions "library")
(blockTypes.runnables "apps")
(blockTypes.installables "packages")
],
...
} @ args:
Default systems:
{
systems ? [
"x86_64-linux"
"aarch64-linux"
"x86_64-darwin"
"aarch64-darwin"
],
...
} @ cfg:
Hello Moon
A slightly more complete hello world tutorial.
This tutorial implements a very typical local Cell and its Cell Blocks for a somewhat bigger project.
It also makes use of more advanced functions of std.
Namely:
- std.growOn instead of std.grow
- std.harvest to provide compatibility layers of "soil"
- non-default Cell Block definitions
- the input debug facility
The terms "Block Type", "Cell", "Cell Block", "Target" and "Action" have special meaning within the context of std
.
With these clear definitions, we navigate and communicate the code structure much more easily.
In order to familiarize yourself with them, please have a quick glance at the glossary.
File Layout
Let's start again with a flake:
./flake.nix
{
inputs.std.url = "github:divnix/std";
inputs.nixpkgs.url = "nixpkgs";
outputs = {std, ...} @ inputs:
/*
brings std attributes into scope
namely used here: `growOn`, `harvest` & `blockTypes`
*/
with std;
/*
grows a flake "from cells" on "soil"; see below...
*/
growOn {
/*
we always inherit inputs and expose a deSystemized version
via {inputs, cell} during import of Cell Blocks.
*/
inherit inputs;
/*
from where to "grow" cells?
*/
cellsFrom = ./nix;
/*
custom Cell Blocks (i.e. "typed outputs")
*/
cellBlocks = [
(blockTypes.devshells "shells")
(blockTypes.nixago "nixago")
];
/*
This debug facility helps you to explore what attributes are available
for a given input until you get more familiar with `std`.
*/
debug = ["inputs" "std"];
}
/*
Soil is an idiom to refer to compatibility layers that are recursively
merged onto the outputs of the `std.grow` function.
*/
# Soil ...
# 1) layer for compat with the nix CLI
{
devShells = harvest inputs.self ["local" "shells"];
}
# 2) there can be various layers; `growOn` is a variadic function
{};
}
This time we specified cellsFrom = ./nix;.
This is a gentle hint so that our colleagues know immediately which files to look at, or never look at, depending on where they stand.
We also used std.growOn instead of std.grow so that we can add compatibility layers of "soil".
Furthermore, we only defined two Cell Blocks: nixago & devshells. More on them follows...
./nix/local/*
Next, we define a local cell.
Each project will have some amount of automation.
This can be repository automation, such as code generation.
Or it can be a CI/CD specification.
In here, we wire up two tools from the Nix ecosystem: numtide/devshell & nix-community/nixago.
In case you don't know them yet, please refer to these links to get yourself a quick overview before continuing this tutorial.
A very short refresher:
- Nixago: Template & render repository (dot-)files with nix. Why nix?
- Devshell: Friendly & reproducible development shells — the original ™.
Some semantic background:
Both, Nixago & Devshell are Component Tools.
(Vertical) Component Tools are distinct from (Horizontal) Integration Tools — such as
std
— in that they provide a specific capability in a minimal linux style: "Do one thing and do it well."Integration Tools however combine them into a polished user story and experience.
The Nix ecosystem is very rich in component tools, however only few integration tools exist at the time of writing.
./nix/local/shells.nix
Let's start with the cell.devshells Cell Block and work our way backwards to the cell.nixago Cell Block below.
More semantic background:
I could also reference them as inputs.cells.local.devshells & inputs.cells.local.nixago.
But, because we are sticking with the local Cell context, we don't want to confuse the future code reader. Instead, we gently hint at the locality by just referring to them via the cell context.
{
inputs,
cell,
}: let
/*
I usually just find it very handy to alias all things library onto `l`...
The distinction between `builtins` and `nixpkgs.lib` has little practical
relevance, in most scenarios.
*/
l = nixpkgs.lib // builtins;
/*
It is good practice to in-scope:
- inputs by *name*
- other Cells by their *Cell names*
- the local Cell Blocks by their *Block names*.
However, for `std`, we make an exception and in-scope, despite being an
input, its primary Cell with the same name as well as the dev lib.
*/
inherit (inputs) nixpkgs;
inherit (inputs.std) std lib;
inherit (cell) nixago;
in
# we use Standard's mkShell wrapper for its Nixago integration
l.mapAttrs (_: lib.dev.mkShell) {
default = {...}: {
name = "My Devshell";
# This `nixago` option is a courtesy of the `std` horizontal
# integration between Devshell and Nixago
nixago = [
# off-the-shelve from `std`
(lib.cfg.conform {data = {inherit (inputs) cells;};})
lib.cfg.lefthook
lib.cfg.adrgen
# modified from the local Cell
nixago.treefmt
nixago.editorconfig
nixago.mdbook
];
# Devshell handily represents `commands` as part of
# its Message Of The Day (MOTD) or the built-in `menu` command.
commands = [
{
package = nixpkgs.reuse;
category = "legal";
/*
For display, reuse already has both a `pname` & `meta.description`.
Hence, we don't need to inline these - they are autodetected:
name = "reuse";
description = "Reuse is a tool to manage a project's LICENCES";
*/
}
];
# Always import the `std` default devshellProfile to also install
# the `std` CLI/TUI into your Devshell.
imports = [std.devshellProfiles.default];
};
}
The nixago = []; option in this definition is a special integration provided by Standard's devshell wrapper (std.lib.dev.mkShell).
This is how std delivers on its promise of being a (horizontal) integration tool that wraps (vertical) component tools into a polished user story and experience.
Because we made use of std.harvest in the flake, you can now actually test out the devshell via the Nix CLI compat layer by just running nix develop -c "$SHELL" in the directory of the flake.
For a more elegant method of entering a development shell, read on to the direnv section below.
./nix/local/nixago.nix
As we have seen above, the nixago option in the cell.devshells Cell Block references Targets from both lib.cfg and cell.nixago.
While you can explore lib.cfg here, let's now have a closer look at cell.nixago:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
inherit (inputs.std) lib;
/*
While these are strictly specializations of the available
Nixago Pebbles at `lib.cfg.*`, it would be entirely
possible to define a completely new pebble inline
*/
in {
/*
treefmt: https://github.com/numtide/treefmt
*/
treefmt = lib.cfg.treefmt {
# we use the data attribute to modify the
# target data structure via a simple data overlay
# (`divnix/data-merge` / `std.dmerge`) mechanism.
data.formatter.go = {
command = "gofmt";
options = ["-w"];
includes = ["*.go"];
};
# for the `std.lib.dev.mkShell` integration with nixago,
# we also hint which packages should be made available
# in the environment for this "Nixago Pebble"
packages = [nixpkgs.go];
};
/*
editorconfig: https://editorconfig.org/
*/
editorconfig = lib.cfg.editorconfig {
data = {
# the actual target data structure depends on the
# Nixago Pebble, and ultimately, on the tool to configure
"*.xcf" = {
charset = "unset";
end_of_line = "unset";
insert_final_newline = "unset";
trim_trailing_whitespace = "unset";
indent_style = "unset";
indent_size = "unset";
};
"{*.go,go.mod}" = {
indent_style = "tab";
indent_size = 4;
};
};
};
/*
mdbook: https://rust-lang.github.io/mdBook
*/
mdbook = lib.cfg.mdbook {
data = {
book.title = "The Standard Book";
};
};
}
In this Cell Block, we have been modifying some built-in convenience lib.cfg.* pebbles.
The way data is merged onto the existing pebble is via a simple left-hand-side/right-hand-side data-merge (std.dmerge).
Background on array merge strategies:
If you know that a plain data-merge does not magically deal with array merge semantics, you'll have noticed: we didn't have to annotate our right-hand-side arrays in this example because we were not actually amending or modifying any left-hand-side array-type data structure.
Had we done so, we would have had to annotate:
- either with std.dmerge.append [/* ... */];
- or with std.dmerge.update [ idx ] [/* ... */].
But lucky us (this time)!
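For completeness, here is a hedged sketch of what such an annotation could look like, had we amended an existing list inside a pebble (the attribute path is made up, and dmerge is assumed to be in scope, e.g. via the std input):
treefmt = lib.cfg.treefmt {
  # appending to a list that already exists on the left-hand side
  # requires an explicit array-merge annotation
  data.formatter.prettier.includes = dmerge.append ["*.md"];
  packages = [nixpkgs.nodePackages.prettier];
};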
Command Line Synthesis
With this configuration in place, you have a couple of options on the command line.
Note that you can also access any std CLI invocation via the std TUI by just typing std, in case you forgot exactly how to access one of these repository capabilities.
Debug Facility:
Since the debug facility is enabled, you will see some trace output while running these commands. To switch this off, just comment out the debug = [ /* ... */ ]; attribute in the flake.
It looks something like this:
trace: inputs on x86_64-linux
trace: { cells = {…}; nixpkgs = {…}; self = {…}; std = {…}; }
Invoke devshell via nix
nix develop -c "$SHELL"
Due to quirks of the Nix CLI, if you don't specify -c "$SHELL", you'll be thrown into an unfamiliar bare bash interactive shell.
That's not what you want.
Invoke the devshell via std
In this case, invoking $SHELL correctly is taken care of for you by the Block Type's enter Action.
# fetch `std`
$ nix shell github:divnix/std
$ std //local/devshells/default:enter
Since we have declared the devshell Cell Block as a blockTypes.devshells, std augments its Targets with the Block Type Actions.
See blockTypes.devshells
for more details on the available Actions and their implementation.
Thanks to the cell.devshells' nixago option, entering the devshell will also automatically reconcile the repository files under Nixago's management.
Explore a Nixago Pebble via std
You can also explore the nixago configuration via the Nixago Block Type's explore Action.
# fetch `std`
$ nix shell github:divnix/std
$ std //local/nixago/treefmt:explore
See blockTypes.nixago
for more details on the available Actions and their implementation.
direnv
Manually entering the devshell is boring.
How about a daemon that always does that automatically & efficiently when you cd into a project directory?
Enter direnv — the original (again; and even from the same author) 😊.
Before you continue, first install direnv according to its install instructions. It's super simple & super useful ™ and you should do it right now if you haven't yet.
Please learn how to enable direnv in this project by following the direnv how-to.
In this case, you would adapt the relevant line to: use std nix //local/shells:default.
Now, you can simply cd into that directory, and the devshell is loaded.
The MOTD will be shown, too.
The first time, you need to teach the direnv daemon to trust the .envrc file via direnv allow.
If you want to reload the devshell (e.g. to reconcile Nixago Pebbles), you can just run direnv reload.
Because I use these commands so often, I've set alias d="direnv" in my shell's RC file.
Growing Cells
Growing cells can be done via two variants:
std.grow { cellsFrom = "..."; /* ... */ }

std.growOn { cellsFrom = "..."; /* ... */ }
  {} # soil
  {} # soil
This eases talking and reasoning about a std-ized repository that also needs some sort of adapters to work better together with external frameworks.
Typically, you'd arrange those adapters in numbered layers of soil, just so that it's easier to conceptually reference them when talking / chatting.
std.growOn is a variadic function and takes an unlimited number of "soil layers".
{
inputs.std.url = "github:divnix/std";
outputs = {std, ...} @ inputs:
std.growOn {
inherit inputs;
cellsFrom = ./cells;
}
# soil
{} # first layer
{} # second layer
{} # ... nth layer
;
}
These layers get recursively merged onto the output of std.grow.
Include Filter
It is very common that you want to filter your source code in order to avoid unnecessary rebuilds and increase your cache hits.
This is so common that std includes a tool for this:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs;
inherit (inputs) std;
in {
backend = nixpkgs.mkYarnPackage {
name = "backend";
src = std.incl (inputs.self + /src/backend) [
(inputs.self + /src/backend/app.js)
(inputs.self + /src/backend/config/config.js)
/* ... */
];
};
}
Setup .envrc
Standard provides an extension to the stdlib via direnv_lib.sh.
The integrity hash below ensures it is downloaded only once and cached from there on.
#!/bin/sh
# first time
if [[ ! -d $(nix store add-path --name source --dry-run .) ]]; then
nix store add-path --name source .
(cd ./src/local && nix flake lock --update-input std)
(cd ./src/tests && nix flake lock --update-input std)
fi
# shellcheck disable=SC1090
. "$(fetchurl "https://raw.githubusercontent.com/paisano-nix/direnv/main/lib" "sha256-IgQhKK7UHL1AfCUntJO2KCaIDJQotRnK2qC4Daxk+wI=")"
use envreload //local/shells/default //local/configs
NOTE: In the above code, use std cells //std/... refers to the folder where Cells are grown from. If your folder is e.g. nix, adapt it to use std nix //... and so forth.
It is used to automatically set up file watches on files that could modify the current devshell, discoverable through these or similar logs during loading:
direnv: loading https://raw.githubusercontent.com/divnix/std/...
direnv: using std cells //local/shells:default
direnv: Watching: cells/local/shells.nix
direnv: Watching: cells/local/shells (recursively)
For reference, the above example loads the default devshell from:
{
inputs,
cell,
}: let
inherit (inputs) nixpkgs namaka;
inherit (inputs.nixpkgs.lib) mapAttrs optionals;
inherit (inputs.std) std;
inherit (inputs.std.lib.dev) mkShell;
inherit (cell) configs;
in
mapAttrs (_: mkShell) rec {
default = {...}: {
name = "Standard";
nixago = [
configs.conform
configs.treefmt
configs.editorconfig
configs.githubsettings
configs.lefthook
configs.adrgen
configs.cog
];
commands =
[
{
package = nixpkgs.reuse;
category = "legal";
}
{
package = nixpkgs.delve;
category = "cli-dev";
name = "dlv";
}
{
package = nixpkgs.go;
category = "cli-dev";
}
{
package = nixpkgs.gotools;
category = "cli-dev";
}
{
package = nixpkgs.gopls;
category = "cli-dev";
}
{
package = namaka.packages.default;
category = "nix-testing";
}
]
++ optionals nixpkgs.stdenv.isLinux [
{
package = nixpkgs.golangci-lint;
category = "cli-dev";
}
];
imports = [std.devshellProfiles.default book];
};
book = {...}: {
nixago = [configs.mdbook];
};
}
Why nix?
A lot of people write a lot of confusing stuff about nix.
So here, we'll try to break it down, instead.
nix is "json on steroids"
In configuration management, you have a choice: data vs. language.
On Stack Overflow, you'll be taught the "data" stance, because it's simple.
But then, all of a sudden, you hit reality. Outside of a "lab" environment, you suddenly need to manage a varying degree of complexity.
So you need configuration combinators, or in other words a full-blown language, to efficiently render your configurations.
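As a tiny, hedged illustration of the idea (the attribute names are made up), a language lets you derive repetitive configuration from data instead of writing it out by hand:
let
  services = ["api" "worker" "frontend"];
in builtins.toJSON {
  # one healthcheck entry per service, generated programmatically
  healthchecks = map (name: { inherit name; url = "http://${name}:8080/health"; }) services;
}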
There are a couple of options that you'll recognize if you've gotten serious about the configuration challenge, like:
And there is nix, the language. In most aspects, it isn't hugely distinct from the others, but it has superpowers. Read on!
nix's superpowers
You know the concept of string interpolation.
Every time nix interpolates an identifier, there is something that you don't immediately see: it keeps a so-called "string context" right at the site of interpolation. That string context holds a directed acyclic graph of all the dependencies that are required to make that string.
"Well, it's just a string; what on earth should I need to make a string?", you may say.
There is a special category of strings, so-called "Nix store paths" (strings that start with /nix/store/...). These store paths represent build artifacts that are content-addressed ahead of time through the inputs of an otherwise pure build function, called a derivation.
When you finally reify (i.e. "build") your string interpolation, all these Nix store paths get built as well.
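A minimal sketch of that idea, not specific to std (nixpkgs' writeShellScript is used for illustration): interpolating a derivation into the script's text records it in the string context, so realizing the script also builds GNU hello.
let
  pkgs = import <nixpkgs> {};
in
  # the resulting store path carries `pkgs.hello` in its string context
  pkgs.writeShellScript "greet" ''
    ${pkgs.hello}/bin/hello
  ''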
This might be a bit of a mind-boggling angle, but after a while, you may realize:
- Nix is a massive build pipeline that tracks all things to their source.
- In their capacity as pure build functions, derivations build reproducibly.
- Reproducible builds are the future of software supply chain security, among other things.
- You'll start asking: "Who the heck invented all that insecure nonsense of opaque binary registries? Shouldn't those smart people have known better?"
- And from this realization, there's no coming back.
- And you'll have joined the European Union, banks and blockchain companies who also realized: we need to fix our utterly broken and insecure build systems!
- By that time, you'll have already assimilated the legendary Ken Thompson's "Reflections on Trusting Trust".
Why std?
Problem
Nix is a marvel to some and a cruelty to others.
Much of this professional schism is due to two fundamental issues:
- Nix is a functional language without typing
- Therefore, Nix enthusiasts seem to freaking love writing the most elegant and novel boilerplate all over again the next day.
The amount of domain-specific knowledge required to untangle those most elegant and novel boilerplate patterns prevents the other side of the schism, very understandably, from seeing through the smoke to the true beauty and benefits of nix as a build and configuration language.
Lack of typing adds to the problem by forcing nix practitioners to go out of their way (e.g. via divnix/yants) to add some internal boundaries and contracts to an ever-morphing global context.
As a consequence, few actually do that. And contracts across internal code boundaries are either absent or rudimentary or — yet again — "elegant and novel". Neither of which satisfactorily settles the issue.
Solution
std doesn't add language-level typing. But a well-balanced folder layout cut at 3 layers of conceptual nesting provides the fundamentals for establishing internal boundaries.
Cell → Cell Block → Target → [Action]
Where ...
- Cells group functionality.
- Cell Blocks type outputs and implement Actions.
- Targets name outputs.
Programmers are really good at pattern-abstraction when looking at two similar but slightly different things: Cells and Cell Blocks set the stage for code readability.
Cell Blocks only allow one possible interface: {inputs, cell}:
- cell: the local Cell, promoting separation of concerns
- inputs: the deSystemized flake inputs, plus:
  - inputs.self = self.sourceInfo; reference source code in nix; filter with std.incl; don't misuse the global self.
  - inputs.cells: the other Cells by name; code that documents its boundaries.
  - inputs.nixpkgs: an instantiated nixpkgs for the current system.
Now, we have organized nix code. Still, nix is not for everybody.
And for everybody else, the std TUI/CLI companion answers a single question to perfection:
The GitOps Question:
What can I actually do with this std-ized repository?
The Standard Answer:
std breaks down GitOps into a single UX-optimized TUI/CLI entrypoint.
Benefit
Not everybody is going to love nix now.
But the ones who know its secrets now have an effective tool to spark the joy more empathically.
Or simply: 💔 → 🧙 → 🔧 → ✨→ 🏖️
The smallest common denominator, in any case:
Only ever install a single dependency (nix) and reach any repository target. Reproducibly.
Architecture Decision Record
An architecture decision record (ADR) is a document that captures an important architectural decision made along with its context and consequences.
The template has all the info.
Usage
To interact with this ADR, enter the devshell and interact through the adrgen tool.
1. Adopt semi-conventional file locations
Date: 2022-03-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
Repository navigation is among the first activities to build a mental model of any given repository.
The Nix Ecosystem has come up with some weak conventions: these are variations that are mainly informed by the nixpkgs repository itself.
Despite that, users find it difficult to quickly "wrap their head" around a new project.
This is oftentimes a result of an organically grown file organization that has trouble keeping up with growing project semantics.
As a result, onboarding onto a "new" nix project, even within the same organizational context, can sometimes be a very frustrating and time-consuming activity.
Decision
What is the change that we're proposing and/or doing?
A semi-conventional folder structure shall be adopted.
That folder structure shall have an abstract organization concept.
At the same time, it shall leave the user maximum freedom of semantics and naming.
Hence, 3 levels of organization are adopted. These levels correspond to the abstract organizational concepts of:
- consistent collection of functionality ("what makes sense to group together?")
- repository output type ("what types of gitops artifacts are produced?")
- named outputs ("what are the actual outputs?")
Consequences
What becomes easier or more difficult to do because of this change?
With this design and despite complete freedom of concrete semantics, a prototypical mental model can be reused across different projects.
That same prototypical mental model also speeds up scaffolding of new content and code.
At the expense of nested folders, it may still be further expanded, if additional organization is required.
All the while, the primary meta-information about a project is properly communicated through these first three levels via the file system API itself (think ls / rg / fd).
On the other hand, this rigidity is sometimes overkill and users may resort to filler names such as "default", because a given semantic only produces singletons.
This is acceptable, however, because this parallelism in addressing even singleton values trades for very easy expansion or refactoring, as the meta-models of code organization already align.
2. Restrict the calling interface
Date: 2022-03-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
The Nix Ecosystem has optimized for contributor efficiency at the expense of local code readability and local reasoning.
Over time, the callPackage idiom was developed, which destructures arbitrary attributes of an ~80k-attribute upstream attribute set provided by nixpkgs.
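For illustration, here is the hello derivation from the tutorial above, rewritten in the callPackage style; the file's arguments are destructured by name from the package set when it is evaluated via something like pkgs.callPackage ./hello.nix {}:
{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.10";
  src = fetchurl {
    url = "mirror://gnu/hello/${pname}-${version}.tar.gz";
    sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
  };
}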
A complicating side condition is added, where overlays modify that original upstream package set in arbitrary ways.
This is not a problem for people who know nixpkgs by heart, and it is not a problem for the author either.
It is a problem for the future code reader, Nix expert or less so, who needs to grasp the essence of "what's going on" under a productivity side condition.
Local reasoning is a tried and tested strategy to help mitigate those issues.
In a variant of this problem, we observe only somewhat convergent, but still largely diverging styles of passing arguments in general across the repository context.
Decision
What is the change that we're proposing and/or doing?
Encourage local reasoning by always fully qualifying identifiers within the scope of a single file.
In order to do so, the entry-level nix files of this framework have exactly one possible interface: {inputs, cell}.
inputs represents the global inputs, whereas cell keeps a reference to the local context.
A Cell is the first ordering principle for a "consistent collection of functionality".
Consequences
What becomes easier or more difficult to do because of this change?
This restricts the notion of "how files can communicate with each other" to the prescribed 3 layers of organization.
That inter-file interface is the only global context to really grasp, and it is structurally aligned across all Standard projects.
By virtue of this meta-model of global context and inter-file communication, the barriers to local reasoning are greatly reduced for a somewhat familiarized code reader.
The two context references are well known (flake inputs & cell-local blocks) and easily discoverable.
For authors, this schema takes away any delay that might arise out of considering how to best structure that inter-file communication.
In our experience, a significant, low-value (and ad hoc) design process can be leapfrogged via this guidance.
3. Hide system for mortals
Date: 2022-04-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
In the context of DevOps (Standard is a DevOps framework), cross-compilation is a significantly lesser concern than it is for packagers.
The pervasive use of system in the current Nix (and foremost Flakes) Ecosystem is an optimization (and in part education) choice for these packagers.
However, in the context of DevOps, while not being irrelevant, it accounts for a fair share of distraction potential.
This ultimately diminishes code readability and reasoning, and consequently adoption, especially in those code paths where system is a secondary concern.
Decision
What is the change that we're proposing and/or doing?
De-systemize everything to the "current" system, effectively hiding the explicit manipulation from plain sight in most cases.
An attribute set that differentiates between systems on any given level of its tree is deSystemized.
This means that all child attributes of the "current" system are lifted onto the "system" level as siblings to the system attributes.
That also means that if an explicit reference to system is necessary, it is still there among the siblings.
The "current" system is brought into scope automatically, however.
What "current" means is an early selector ("select early and forget"), usually determined by the user's operating system.
Consequences
What becomes easier or more difficult to do because of this change?
The explicit handling of system in foreign contexts, where system is not a primary concern, is largely eliminated.
This makes using this framework a little easier for everybody, including packaging experts.
Since nixpkgs itself exposes nixpkgs.system, and packaging without nixpkgs is hardly imaginable, power users still enjoy easy access to the "current" system in case it's needed.
4. Early select system for conceptual untangling
Date: 2022-04-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
Building on the previous ADR, we saw why we hide system from plain sight.
In that ADR, we mention "select early and forget" as a strategy to scope the current system consistently across the project.
The current best practices for flakes postulate system as the second-level selector of an output attribute.
For current flakes, type primes over system.
However, this design choice makes the motto "select early and forget" a pain to work with across multiple code paths.
This handling is exacerbated by the distinction between "systemized" and "non-systemized" (e.g. lib) output attributes.
In the overall set of optimization goals of this framework, this distinction is of extraordinarily poor value, all the more so as function calls are memoized during a single evaluation, which renders the system selector computationally irrelevant where it is not used.
Decision
What is the change that we're proposing and/or doing?
- Move the system selector from the second level to the first level.
- Apply the system selector regardless and without exception.
Consequences
What becomes easier or more difficult to do because of this change?
The motto "select early and forget" makes various code-paths easier to reason about and maintain.
The Nix CLI completion won't respond gracefully to these changes. However, the Nix CLI is explicitly not a primary target of this framework. The reason for this is that the use cases for the Nix CLI are somewhat skewed towards the packager use case, but in any case they are (currently) not purpose-built for the DevOps use case.
A simple patch to the Nix binary can mitigate this for people whose muscle memory prefers the Nix CLI regardless. If you've already got that level of muscle memory, its meandering scope is probably not an issue for you anymore anyway.
5. Nixpkgs is still special, but not too much
Date: 2022-05-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
In general, Standard wouldn't treat any input as special.
However, no project that requires source distributions of one of the 80k+ packages available in nixpkgs
can practically do without it.
Now, nixpkgs has this weird and counter-intuitive mouthful of legacyPackages, which was originally intended to ring an alarm bell and, for non-nix-historians, still does.
Also, not very many other package collections adopt this idiom, which makes it pretty much a singularity of the Nix package collection (nixpkgs).
Decision
What is the change that we're proposing and/or doing?
If inputs.nixpkgs is provided, its legacyPackages are in-scoped onto inputs.nixpkgs directly.
Consequences
What becomes easier or more difficult to do because of this change?
Users of Standard access packages as nixpkgs.<package-name>.
Users that want to interact with NixOS do so by loading nixos = import (inputs.nixpkgs + "/nixos"); or similar.
The close coupling of the Nix Package Collection and NixOS is now broken.
This suits the DevOps use case well, as it is not primarily concerned with the inseparable union of the Nix Package Collection and NixOS.
It rather presents a plethora of use cases that are content with the Nix Package Collection alone, and where NixOS would be a distraction.
Now, this separation is more explicit.
Another consequence of not treating nixpkgs (or even the packaging use case) as special is that Standard does not implement primary support for overlays.
6. Avoid fix-point logic, such as overlays
Date: 2022-05-01
Status
accepted
Context
What is the issue that we're seeing that is motivating this decision or change?
Fix point logic is marvelously magic and also very practical.
A lot of people love the concept of nixpkgs's overlays.
However, we've all been suckers in the early days, and fix-point logic probably wasn't one of the concepts that we grasped intuitively, right at the beginning of our Nix journey.
The concept of recursion all in itself is already demanding to reason about, and the concept of recurse-until-no-longer-possible is even more mind-boggling.
Fix points are also clear instances of overloading global context.
And global context is a double-edged sword: high productivity for the one who has a good mental model of it, and a nightmare for the one who has to resort to local reasoning.
Decision
What is the change that we're proposing and/or doing?
In the interest of balancing productivity (for the veteran) and ease of onboarding (for the novice), we do not implement prime support for fix-point logic, such as overlays, at the framework level.
Consequences
What becomes easier or more difficult to do because of this change?
Users who depend on it need to scope its use to a particular Cell Block.
For the Nix package collection, users can do, for example: nixpkgs.appendOverlays [ /* ... */ ].
There is a small penalty in evaluating nixpkgs a second time, since every moving of the fix point retriggers a complete evaluation.
But since this decision is made in the interest of balancing trade-offs, this appears to be cost-effective in accordance with the overall optimization goals of Standard.
Beware: This is an opinionated pattern.
However, it helps to structure collaboration on micro-services with Standard.
The 4 Layers of Packaging
The Problem
We have coded our application and now we want to package and run it.
So, let's just dockerfile the application, dockercompose it into a running thing and celebrate with pizza and beer.
But not so fast!
The statistics about supply chain attacks are alarming. Here are some references:
- Argon, an Aqua Security company, has found that software supply chain attacks grew by over 300% in 2021.
- Gartner predicts that by 2025, 45% of organizations will have experienced a software supply chain attack.
- The FBI has reported a 62% increase in ransomware attacks from 2020 to 2021.
- A Cloudbees survey showed that 45% of enterprises have admitted that they’ve secured only half of their software supply chain.
So is the party over, before it even started?
Aggregating articles from Microsoft and ArsTechnica, we can find three broad doors of entry for supply chain attacks:
- Compromised build tools or update infrastructure
- Compromised dependencies through well (i.e. source) or cache (i.e. binary) poisoning
- Compromised identity signing a malicious app to bypass certificate-backed provenance checks
In this pattern piece, we employ a 20-year-old, unique approach to packaging to shut some of these doors of entry. This battle-tested approach denies a supply chain attacker the ability to compromise build tools and update infrastructure or to poison a cache. Alongside, we explore how teams can structure their collaboration on packaging a micro-service architecture with that technology.
The Team
Operator
The operator brings the application to production.
Production is a place where she cannot allow an attacker to gain access.
Therefore,
- she clears and protects the perimeter via "perimeter security" tactics,
- she secures transport via "zero trust",
- she encrypts secrets at rest and in flight,
- but, what if the very artifact that is being deployed is a trojan horse?
Many times, we've turned our eyes away from that big lurking security hole because we thought it was practically impossible to mitigate it across a decently large software bill of materials.
But for 20 years now, we actually have been able to!
In order to prevent an attacker from entering in such a way, the operator has to secure her supply chain in close collaboration with our other participant.
Developer
The developer incrementally modifies the source code.
From time to time, these modifications need to be shipped to that production place.
Locally, on the developer machine, everything looks good and also the CI doesn't complain.
So, off we go!
However, there are a couple of guarantees that she struggles to give with confidence, such as:
- Have you verified all base images and analyzed them for their potential attack surface?
- Can you guarantee stable base images that will never change, once verified?
- Have you properly validated all upstream binary-distributed blobs yourself?
- Can you guarantee that all dependencies, system & language level, are verified and stable?
The Layers
The Standard layers of packaging are designed to bring both participants together around a framework that holds software supply chain security dear and attackers out.
By providing a shared mental model for flexible, yet structured collaboration, it successfully circumnavigates some of the pitfalls of the underlying technology.
flowchart TD
packaging([Packaging])
operable([Operable])
image([OCI-Image])
scheduler([Scheduler Chart])
packaging --> operable
operable --> image
image --> scheduler
click packaging href "#packaging-layer" "Jump to the packaging layer section"
click operable href "#operable-layer" "Jump to the operable layer section"
click image href "#oci-image-layer" "Jump to the OCI image layer section"
click scheduler href "#scheduler-chart-layer" "Jump to the scheduler chart layer section"
Packaging Layer
This layer builds the pristine executable application as written by the developer with the building tools of the developer.
However, to ensure the software supply chain requirements, these build instructions are run in the context of a very restricted build environment provided by Nix.
Nix has a vast ecosystem, which makes embedding these build instructions for most languages straightforward.
For many languages, the Nix ecosystem has already developed and vetted golden packaging paths.
That means that, in most cases, we can simply put those ecosystem libraries to work.
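As a hedged sketch (project name, sources and hash handling are placeholders), such a packaging Cell Block hands the heavy lifting to one of those ecosystem builders:
# nix/mycell/packages.nix (hypothetical)
{inputs, cell}: {
  my-service = inputs.nixpkgs.buildGoModule {
    pname = "my-service";   # placeholder project
    version = "0.1.0";
    src = ./my-service;     # assumes the Go sources live next to this file
    vendorHash = null;      # this sketch assumes no vendored dependencies
  };
}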
Operable Layer
More often than not, born in the heat of chasing features, some aspects of the binary are not quite to the liking and needs of the operator.
We need a buffer zone to accommodate that, yet with a clear perspective to backlog and polish it off later.
The operable layer is that buffer zone. It is a typically scripted wrapper around the binary which instruments the application for operation.
It is written in a language that both, developer and operator, are comfortable with.
The only viable perspective for this operable wrapper is to become as thin as possible.
Should unmet operational scenarios fatten it up, our participants would schedule a backlog grooming session to address the issue. In order to put that wrapper on a diet, they would refactor missing capabilities from the operable into the binary.
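A hedged sketch of such a wrapper, built with the mkOperable helper documented further below; the package, environment variable and flag are illustrative:
# nix/mycell/operables.nix (hypothetical)
{inputs, cell}: {
  my-service = inputs.std.lib.ops.mkOperable {
    package = cell.packages.my-service;
    runtimeInputs = [inputs.nixpkgs.coreutils];
    runtimeScript = ''
      # thin by design: only translate operator-facing settings into flags
      exec ${cell.packages.my-service}/bin/my-service --port "''${PORT:-8080}"
    '';
  };
}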
OCI-Image Layer
But we wouldn't send a script over the wire to our production place, right?!?
Right. To distribute our artifacts, we adopt the OCI Container Specification.
In simple terms, it's a docker image, but that would be misleading terminology. And misleading you here for the sake of convenience doesn't fit our reinvigorated software supply chain mindset.
The OCI Distribution Specification ensures that distribution, for all intents and purposes of the runtime, is an atomic transaction.
This is convenient, because there is little possibility to end up with a partial and corrupt, but technically executable target state.
It is also convenient, because it is the current de-facto industry standard.
The industry, however, is presently discussing the well-understood toll on startup times. It is common practice that a non-trivial amount of stale artifacts is shipped very frequently. Through time spent on transport, decompression, extraction and layer re-assembly, they contribute to a noticeable runtime setup latency.
Nix ensures that only the bare minimum runtime dependencies are included in every OCI image. Optionally, static builds can be turned on to further dramatically reduce the effective size of the container archive.
And last but not least, recent initiatives at the intersection of both ecosystems strive to further develop cross-pollination of concepts and ideas.
For example, in the Nix ecosystem, massive dependency reuse through global build trees for 50k+ packages is the norm. This technique is also a promising approach to increase the effectiveness of the OCI layer deduplication by a significant margin.
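A hedged sketch of the corresponding image target, using the mkStandardOCI helper documented later in this reference (registry path and names are placeholders):
# nix/mycell/oci-images.nix (hypothetical)
{inputs, cell}: {
  my-service = inputs.std.lib.ops.mkStandardOCI {
    name = "registry.example.com/my-service";  # placeholder registry path
    operable = cell.operables.my-service;
    tag = "latest";                            # optional; defaults to the output hash
  };
}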
Scheduler Chart Layer
The final runtime configuration is data.
The scheduler chart layer provides example data that also satisfies the requirements of operational readiness.
This data may be amended according to the concrete operational scenario before being rendered and submitted to the scheduler for reconciliation.
This pattern piece does not proffer any particular tooling to do so.
Any configuration wrangler that has good support for functions and overrides is probably fair game.
One may use Nix, however, as the glue code that provides a globally homologous interface on the command line.
While Standard offers its TUI to that end, many operators may be also already familiar with the vanilla Nix CLI tooling.
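As a hedged illustration, such example data can live in an ordinary data Cell Block; all values below are placeholders:
# nix/mycell/charts.nix (hypothetical)
{inputs, cell}: {
  my-service = {
    replicas = 2;
    image = "registry.example.com/my-service:latest";
    resources.limits.memory = "256Mi";
  };
}
The data and kubectl Block Types described below then provide actions to write, render, explore or apply such targets.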
Beware: This is an opinionated analysis.
However, it helps to reason about CI/CD with Standard. Standard developed the concept of self-configuring CI through the Standard Registry.
Overview
Let's look at Continuous Integration (CI) and Continuous Delivery (CD) from a 10,000-foot view.
For our visual analysis, we use the Business Process Modeling Notation (BPMN 2.0).
For an overview of the notation follow this link. But don't worry, we'll also walk you through the diagram.
General Phases
There are four phases that further structure the process.
flowchart TD
linting([Linting])
building([Building])
deployment([Deployment])
probing([Probing & Attestation])
linting --> building
building --> deployment
deployment --> probing
click linting href "#linting-phase" "Jump to the linting phase section"
click building href "#building-phase" "Jump to the building phase section"
click deployment href "#deployment-phase" "Jump to the deployment phase section"
click probing href "#probing-attestation-phase" "Jump to the probing & attestation phase section"
To automate these, we make use of the [Standard Registry][glossary-registry] that holds all data needed to build the pipeline. Its data is JSON-serializable so that any CI tool or helper library can auto-generate the pipeline from it.
Linting Phase
The linting phase ensures that the code base is in good shape. This can involve verification of formatting, style and auto-generated code.
Typically, these are simple repository tasks that call tools to get the job done.
In the local development environment, you invoke these with your task runner of choice or even configure them in a pre-commit or pre-push hook.
In addition, a Standard-ready CI runs them first to ensure a short time-to-first-feedback.
Building Phase
We rely on Nix's declarative build and dependency trees to set up an efficient, intelligently scheduled and reproducible build pipeline.
The Nix cache ensures that no single build is done twice, as long as the build inputs, for example source code or dependencies, do not change.
Since the full dependency tree is known beforehand, intelligent scheduling ensures that shared dependencies are built and cached first before dependent builds are enqueued.
These properties rest on the foundation of reproducible builds: an evaluator predicts expected output hashes ahead of time by recursively hashing over all dependencies' own output hashes. Since that evaluation is cheap compared to a full build, it is calculated before even starting the first build and is exploited for smart scheduling of the build queue.
An optimized build farm can make particular use of that ahead-of-time evaluation to further optimize overall build times.
The Standard Registry holds all the data in machine readable format that is required by such build farms.
Deployment Phase
The deployment phase renders service runtime configuration into manifests and pushes them to the API of a platform scheduler, such as Kubernetes.
All reconciliation of the desired application runtime state is then the responsibility of a control loop built or plugged into that scheduler.
Push vs Pull Workflows
The industry has diverging opinions about whether a deployment should be pull or push based.
A pull based workflow is initiated by the target infrastructure polling for changes to the manifest source in regular intervals.
The advantages of a pull based workflow include a reduced intrusion surface since any network connection will be strictly outgoing from the target infrastructure.
A push based workflow, on the other hand, starts with CI which triggers deployment based on a particular precondition being met.
Having the target infrastructure listening for incoming connections from the orchestrating CI is also the main disadvantage of the push based workflow, as it increases the intrusion surface. However, orchestrated workflows, as opposed to the pull-based choreographed ones, are usually easier to reason about and, thus, easier to maintain.
A Standard-ready CI can typically cover simple deployments that follow a trivial render-and-push logic.
For more advanced workflows and roll-out conditions, a suitable state machine is required.
Probing & Attestation Phase
The probing and attestation phase is highly situation specific. It cannot be adequately represented through Standard and requires an entirely different control loop.
During this phase a mix of short- & long-lived testing suites are run against a particular target environment, usually called "testing" or "staging".
Some of these suites can be automated in proper test scheduling frameworks, others are inherently manual.
Test suites may include the likes of:
- Penetration Testing
- Property-Based Testing and Fuzzing
- Monkey Testing
- Load and Soak Testing
- End2End Testing
- Benchmarking
- Smoke Testing
- Runtime and Code Auditing
In this pattern piece, we didn't cover the release process. But we'll follow up with a dedicated pattern piece shortly.
A minimal project template with docs!
Included Configuration
- devshell for your contribution environments!
- treefmt for formatting all the things!
- mdbook for making documentation part of your workflow!
- lefthook for commit discipline and a clean history!
- GitHub Settings App for configuring GitHub declaratively!
Bootstrap
# make a new empty project dir
mkdir my-project
cd my-project
# grab the template
nix flake init -t github:divnix/std#minimal
# see which values to change
grep -r --include=\*.nix 'CONFIGURE-ME' .
# do some initialization
git init && git add .
# enter the devshell and effectuate repo configuration
direnv allow
git add . && git commit -m "feat: initial commit"
Standard, and Nix and Rust, oh my!
This template uses Nix to create a sane development shell for Rust projects, Standard for keeping your Nix code well organized, Fenix for pulling the latest rust binaries via Nix, and Crane for building Rust projects in Nix incrementally, making quick iteration a breeze.
Rust Analyzer is also wired up properly for immediate use from a terminal based editor with language server support. Need one with stellar Nix and Rust support? Try Helix!
Bootstrap
# make a new empty project dir
mkdir my-project
cd my-project
# grab the template
nix flake init -t github:divnix/std#rust
# do some initialization
git init && git add .
# enter the devshell
direnv allow || nix develop
# continue some initialization
cargo init # pass --lib for library projects
cargo build # to generate Cargo.lock
git add . && git commit -m "init"
TUI/CLI
TUI/CLI:
# TUI
std
# CLI
std //<TAB>
std re-cache # refresh the CLI cache
std list # show a list of all targets
# Version
std -v
Help:
❯ std -h
std is the CLI / TUI companion for Standard.
- Invoke without any arguments to start the TUI.
- Invoke with a target spec and action to run a known target's action directly.
Usage:
std //[cell]/[block]/[target]:[action] [args...]
std [command]
Available Commands:
list List available targets.
re-cache Refresh the CLI cache.
Flags:
-h, --help help for std
-v, --version version for std
Use "std [command] --help" for more information about a command.
Conventions in std
In principle, we all want to be able to read code with local reasoning.
However, these few conventions are pure quality of life and help us to keep our nix code organized.
Nix File Locations
Nix files are imported from either of these two locations, if present, in this order of precedence:
${cellsFrom}/${cell}/${block}.nix
${cellsFrom}/${cell}/${block}/default.nix
Readme File Locations
Readme files are picked up by the TUI in the following places:
${cellsFrom}/${cell}/Readme.md
${cellsFrom}/${cell}/${block}.md
${cellsFrom}/${cell}/${block}/Readme.md
${cellsFrom}/${cell}/${block}/${target}.md
Cell Block File Arguments
Each Cell Block is a function and expects the following standardized interface for interoperability:
{ inputs, cell }: {}
The inputs
argument
The inputs
argument holds all the de-systemized flake inputs plus a few special inputs:
{
inputs = {
self = {}; # sourceInfo of the current repository
nixpkgs = {}; # an _instantiated_ nixpkgs
cells = {}; # the other cells in this repo
};
}
The cell
argument
The cell
argument holds all the different Cell Block targets of the current cell.
This is the main mechanism by which code organization and separation of concern is enabled.
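For illustration, a hedged sketch of one block referencing a target of another block within the same cell (the shell helper and package name are assumptions):
# nix/mycell/shells.nix (hypothetical)
{inputs, cell}: {
  default = inputs.std.lib.dev.mkShell {
    # a target from this cell's `packages` block, reachable via `cell`
    packages = [cell.packages.my-tool];
  };
}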
The deSystemized inputs
All inputs are scoped to the current system, which is derived from the systems input list of std.grow.
That means that, contrary to the usual Nix UX, in most cases you don't need to worry about system.
The current system will be "lifted up" one level, while still providing full access to all systems for cross-compilation scenarios.
# inputs.a.packages.${system}
{
inputs.a.packages.pkg1 = {};
inputs.a.packages.pkg2 = {};
/* ... */
inputs.a.packages.${system}.pkg1 = {};
inputs.a.packages.${system}.pkg2 = {};
/* ... */
}
Top-level system
-scoping of outputs
Contrary to the upstream flake schema, all outputs are scoped by system at the top level.
This allows us to uniformly select on the current system and forget about it most of the time.
Sometimes nix evaluations don't strictly depend on a particular system, and scoping them seems counter-intuitive.
But because function calls are memoized, there is never a penalty in actually scoping them.
So for the sake of uniformity, we scope them anyway.
The outputs therefore abide by the following "schema":
{
${system}.${cell}.${block}.${target} = {};
}
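For instance, assuming a cell mycell with a packages block and a default target, the flake output on an x86_64-linux machine lives at:
inputs.self.x86_64-linux.mycell.packages.default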
Deprecations
{inputs}: time: body: let
l = inputs.nixpkgs.lib // builtins;
ansi = import ./ansi.nix;
pad = s: let
n = 17;
prefix = l.concatStringsSep "" (l.genList (_: " ") (n - (l.stringLength s)));
in
prefix + s;
indent = s: let
n = 5;
prefix = l.concatStringsSep "" (l.genList (_: " ") n);
lines = l.splitString "\n" s;
in
" 📝 │ " + (l.concatStringsSep "\n${prefix}│ " lines);
warn = let
apply =
l.replaceStrings
(map (key: "{${key}}") (l.attrNames ansi))
(l.attrValues ansi);
in
msg:
l.trace (apply "🔥 {bold}{196}Standard Deprecation Notices - {220}run `std check' to show!{reset}")
l.traceVerbose (apply "\n{202}${msg}{reset}");
in
warn ''
─────┬─────────────────────────────────────────────────────────────────────────
💪 │ {bold}Action Required !{un-bold}
─────┼─────────────────────────────────────────────────────────────────────────
{italic}${indent body}{un-italic}
─────┼─────────────────────────────────────────────────────────────────────────
📅 │ {bold}Scheduled Removal: ${pad time}{un-bold}
─────┴─────────────────────────────────────────────────────────────────────────
''
Please observe the following deprecations and their deprecation schedule:
inputs: let
removeBy = import ./cells/std/errors/removeBy.nix {inherit inputs;};
in {
}
Builtin Block Types
A few Block Types are packaged with std
.
In practical terms, Block Types distinguish themselves through the actions they provide to a particular Cell Block.
It is entirely possible to define custom Block Types with custom Actions according to the needs of your project.
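As a hedged sketch, a bare custom Block Type (no actions) only needs to mirror the structure of the builtin ones below; the type name is hypothetical:
# a minimal custom Block Type: just a name and a type, no actions
name: {
  inherit name;
  type = "my-custom-type";
}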
Arion
{root}:
/*
Use the Arion Blocktype for arionCompose Jobs - https://docs.hercules-ci.com/arion/
Available actions:
- up
- ps
- stop
- rm
- config
- arion
*/
let
inherit (root) mkCommand;
in
name: {
inherit name;
type = "arion";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
pkgs = inputs.nixpkgs.${currentSystem};
cmd = "arion --prebuilt-file ${target.config.out.dockerComposeYaml}";
in [
(mkCommand currentSystem "up" "arion up" [pkgs.arion] ''${cmd} up "$@" '' {})
(mkCommand currentSystem "ps" "exec this arion task to ps" [pkgs.arion] ''${cmd} ps "$@" '' {})
(mkCommand currentSystem "stop" "arion stop" [pkgs.arion] ''${cmd} stop "$@" '' {})
(mkCommand currentSystem "rm" "arion rm" [pkgs.arion] ''${cmd} rm "$@" '' {})
(mkCommand currentSystem "config" "check the docker-compose yaml file" [pkgs.arion] ''${cmd} config "$@" '' {})
(mkCommand currentSystem "arion" "pass any command to arion" [pkgs.arion] ''${cmd} "$@" '' {})
];
}
Runnables (todo: vs installables)
{
root,
super,
}:
/*
Use the Runnables Blocktype for targets that you want to
make accessible with a 'run' action on the TUI.
*/
let
inherit (root) mkCommand actions;
inherit (super) addSelectorFunctor;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "runnables";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: [
(actions.build currentSystem target)
(actions.run currentSystem target)
];
}
Installables (todo: vs runnables)
{
root,
super,
nixpkgs,
}:
/*
Use the Installables Blocktype for targets that you want to
make available for installation into the user's nix profile.
Available actions:
- install
- upgrade
- remove
- build
- bundle
- bundleImage
- bundleAppImage
*/
let
inherit (root) mkCommand actions;
inherit (super) addSelectorFunctor;
l = nixpkgs.lib // builtins;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "installables";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
escapedFragment = l.escapeShellArg fragment;
in [
(actions.build currentSystem target)
# profile commands require a flake ref
(mkCommand currentSystem "install" "install this target" [] ''
# ${target}
set -x
nix profile install "$PRJ_ROOT#"${escapedFragment}
'' {})
(mkCommand currentSystem "upgrade" "upgrade this target" [] ''
# ${target}
set -x
nix profile upgrade "$PRJ_ROOT#"${escapedFragment}
'' {})
(mkCommand currentSystem "remove" "remove this target" [] ''
# ${target}
set -x
nix profile remove "$PRJ_ROOT#"${escapedFragment}
'' {})
# TODO: use target. `nix bundle` requires a flake ref, but we may be able to use nix-bundle instead as a workaround
(mkCommand currentSystem "bundle" "bundle this target" [] ''
# ${target}
set -x
nix bundle --bundler github:Ninlives/relocatable.nix --refresh "$PRJ_ROOT#"${escapedFragment}
'' {})
(mkCommand currentSystem "bundleImage" "bundle this target to image" [] ''
# ${target}
set -x
nix bundle --bundler github:NixOS/bundlers#toDockerImage --refresh "$PRJ_ROOT#"${escapedFragment}
'' {})
(mkCommand currentSystem "bundleAppImage" "bundle this target to AppImage" [] ''
# ${target}
set -x
nix bundle --bundler github:ralismark/nix-appimage --refresh "$PRJ_ROOT#"${escapedFragment}
'' {})
];
}
Pkgs
_:
/*
Use the Pkgs Blocktype if you need to construct your custom
variant of nixpkgs with overlays.
Targets will be excluded from the CLI / TUI and thus not
slow them down.
*/
name: {
inherit name;
type = "pkgs";
cli = false; # its special power
}
Devshells
{
root,
super,
}:
/*
Use the Devshells Blocktype for devShells.
Available actions:
- build
- enter
*/
let
inherit (root) mkCommand actions devshellDrv;
inherit (super) addSelectorFunctor;
inherit (builtins) unsafeDiscardStringContext;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "devshells";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
developDrv = devshellDrv target;
in [
(actions.build currentSystem target)
(mkCommand currentSystem "enter" "enter this devshell" [] ''
profile_path="$PRJ_DATA_HOME/${fragmentRelPath}"
mkdir -p "$profile_path"
# ${developDrv}
nix_args=(
"${unsafeDiscardStringContext developDrv.drvPath}"
"--no-update-lock-file"
"--no-write-lock-file"
"--no-warn-dirty"
"--accept-flake-config"
"--no-link"
"--build-poll-interval" "0"
"--builders-use-substitutes"
)
nix build "''${nix_args[@]}" --profile "$profile_path/shell-profile"
_SHELL="$SHELL"
eval "$(nix print-dev-env ${developDrv})"
SHELL="$_SHELL"
if ! [[ -v STD_DIRENV ]]; then
if declare -F __devshell-motd &>/dev/null; then
__devshell-motd
fi
exec $SHELL -i
fi
'' {})
];
}
Nixago
{root}:
/*
Use the Nixago Blocktype for nixago pebbles.
Use Nixago pebbles to ensure files are present
or symlinked into your repository. You may typically
use this for repo dotfiles.
For more information, see: https://github.com/nix-community/nixago.
Available actions:
- populate
- explore
*/
let
inherit (root) mkCommand;
in
name: {
inherit name;
type = "nixago";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
pkgs = inputs.nixpkgs.${currentSystem};
in [
(mkCommand currentSystem "populate" "populate this nixago file into the repo" [] ''
${target.install}/bin/nixago_shell_hook
'' {})
(mkCommand currentSystem "explore" "interactively explore the nixago file" [pkgs.bat] ''
bat "${target.configFile}"
'' {})
];
}
Containers
{
trivial,
root,
super,
}:
/*
Use the Containers Blocktype for OCI-images built with nix2container.
Available actions:
- print-image
- publish
- load
*/
let
inherit (root) mkCommand actions;
inherit (super) addSelectorFunctor;
inherit (builtins) readFile toFile;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "containers";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
inherit (inputs.n2c.packages.${currentSystem}) skopeo-nix2container;
triv = trivial.${currentSystem};
proviso = ./containers-proviso.sh;
tags' =
builtins.toFile "${target.name}-tags.json" (builtins.concatStringsSep "\n" target.image.tags);
copyFn = ''
copy() {
local uri prev_tag
uri=$1
shift
for tag in $(<${tags'}); do
if ! [[ -v prev_tag ]]; then
skopeo --insecure-policy copy nix:${target} "$uri:$tag" "$@"
else
# speedup: copy from the previous tag to avoid superfluous network bandwidth
skopeo --insecure-policy copy "$uri:$prev_tag" "$uri:$tag" "$@"
fi
echo "Done: $uri:$tag"
prev_tag="$tag"
done
}
'';
in [
(actions.build currentSystem target)
(mkCommand currentSystem "print-image" "print out the image.repo with all tags" [] ''
echo
for tag in $(<${tags'}); do
echo "${target.image.repo}:$tag"
done
'' {})
(mkCommand currentSystem "publish" "copy the image to its remote registry" [skopeo-nix2container] ''
${copyFn}
copy docker://${target.image.repo}
'' {
meta.image = target.image.name;
inherit proviso;
})
(mkCommand currentSystem "load" "load image to the local docker daemon" [skopeo-nix2container] ''
${copyFn}
if command -v podman &> /dev/null; then
echo "Podman detected: copy to local podman"
copy containers-storage:${target.image.repo} "$@"
fi
if command -v docker &> /dev/null; then
echo "Docker detected: copy to local docker"
copy docker-daemon:${target.image.repo} "$@"
fi
'' {})
];
}
Terra
Block type for managing Terranix configuration for Terraform.
{
root,
super,
}:
/*
Use the Terra Blocktype for terraform configurations managed by terranix.
Important! You need to specify the state repo on the blocktype, e.g.:
[
(terra "infra" "[email protected]:myorg/myrepo.git")
]
Available actions:
- init
- plan
- apply
- state
- refresh
- destroy
*/
let
inherit (root) mkCommand;
inherit (super) addSelectorFunctor postDiffToGitHubSnippet;
in
name: repo: {
inherit name;
__functor = addSelectorFunctor;
type = "terra";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
inherit (inputs) terranix;
pkgs = inputs.nixpkgs.${currentSystem};
git = {
inherit repo;
ref = "main";
state = fragmentRelPath + "/state.json";
};
terraEval = import (terranix + /core/default.nix);
terraformConfiguration = builtins.toFile "config.tf.json" (builtins.toJSON
(terraEval {
inherit pkgs; # only effectively required for `pkgs.lib`
terranix_config = {
_file = fragmentRelPath;
imports = [target];
};
strip_nulls = true;
})
.config);
setup = ''
export TF_VAR_fragment=${pkgs.lib.strings.escapeShellArg fragment}
export TF_VAR_fragmentRelPath=${fragmentRelPath}
export TF_IN_AUTOMATION=1
export TF_DATA_DIR="$PRJ_DATA_HOME/${fragmentRelPath}"
export TF_PLUGIN_CACHE_DIR="$PRJ_CACHE_HOME/tf-plugin-cache"
mkdir -p "$TF_DATA_DIR"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
dir="$PRJ_ROOT/.tf/${fragmentRelPath}/.tf"
mkdir -p "$dir"
cat << MESSAGE > "$dir/readme.md"
This is a tf staging area.
It is motivated by the terraform CLI requiring execution from within a staging area.
MESSAGE
if [[ -e "$dir/config.tf.json" ]]; then rm -f "$dir/config.tf.json"; fi
jq '.' ${terraformConfiguration} > "$dir/config.tf.json"
'';
wrap = cmd: ''
${setup}
# Run the command and capture output
terraform-backend-git git \
--dir "$dir" \
--repository ${git.repo} \
--ref ${git.ref} \
--state ${git.state} \
terraform ${cmd} "$@" \
${pkgs.lib.optionalString (cmd == "plan") ''
-lock=false -no-color | tee "$PRJ_CACHE_HOME/tf.console.txt"
''}
# Pass output to the snippet
${pkgs.lib.optionalString (cmd == "plan") ''
output=$(cat "$PRJ_CACHE_HOME/tf.console.txt")
summary_plan=$(tac "$PRJ_CACHE_HOME/tf.console.txt" | grep -m 1 -E '^(Error:|Plan:|Apply complete!|No changes.|Success)' | tac || echo "View output.")
summary="<code>std ${fragmentRelPath}:${cmd}</code>: $summary_plan"
${postDiffToGitHubSnippet "${fragmentRelPath}:${cmd}" "$output" "$summary"}
''}
'';
in [
(mkCommand currentSystem "init" "tf init" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "init") {})
(mkCommand currentSystem "plan" "tf plan" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "plan") {})
(mkCommand currentSystem "apply" "tf apply" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "apply") {})
(mkCommand currentSystem "state" "tf state" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "state") {})
(mkCommand currentSystem "refresh" "tf refresh" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "refresh") {})
(mkCommand currentSystem "destroy" "tf destroy" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "destroy") {})
(mkCommand currentSystem "terraform" "pass any command to terraform" [pkgs.jq pkgs.terraform pkgs.terraform-backend-git] (wrap "") {})
];
}
Data
{
trivial,
root,
}:
/*
Use the Data Blocktype for json serializable data.
Available actions:
- write
- explore
For all actions the following holds:
Nix-proper 'stringContext'-carried dependencies will be realized
into the store, if present.
*/
let
inherit (root) mkCommand;
inherit (builtins) toJSON concatStringsSep;
in
name: {
inherit name;
type = "data";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
inherit (inputs.nixpkgs.${currentSystem}) pkgs;
triv = trivial.${currentSystem};
# if target ? __std_data_wrapper, then we need to unpack from `.data`
json = triv.writeTextFile {
name = "data.json";
text = toJSON (
if target ? __std_data_wrapper
then target.data
else target
);
};
in [
(mkCommand currentSystem "write" "write to file" [] "echo ${json}" {})
(mkCommand currentSystem "explore" "interactively explore" [pkgs.fx] (
concatStringsSep "\t" ["fx" json]
) {})
];
}
Functions
_:
/*
Use the Functions Blocktype for reusable nix functions that you would
call elswhere in the code.
Also use this for all types of modules and profiles, since they are
implemented as functions.
Consequently, there are no actions available for functions.
*/
name: {
inherit name;
type = "functions";
}
Anything
Note: while the implementation is the same as functions
, the semantics are different. Implementations may diverge in the future.
_:
/*
Use the Anything Blocktype as a fallback.
It doesn't have actions.
*/
name: {
inherit name;
type = "anything";
}
Kubectl
Block type for rendering deployment manifests for the Kubernetes Cluster scheduler. Each named attribute set under the block contains a set of deployment manifests.
{
trivial,
root,
super,
dmerge,
}:
/*
Use the `kubectl` Blocktype for rendering deployment manifests
for the Kubernetes Cluster scheduler. Each named attribute set under the
block contains a set of deployment manifests.
Available actions:
- render
- diff
- apply
- explore
*/
let
inherit (root) mkCommand;
inherit (super) addSelectorFunctor askUserToProceedSnippet postDiffToGitHubSnippet;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "kubectl";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
inherit (inputs.nixpkgs) lib;
pkgs = inputs.nixpkgs.${currentSystem};
triv = trivial.${currentSystem};
manifest_path = fragmentRelPath;
checkedRev = inputs.std.std.errors.bailOnDirty ''
Will not render manifests from a dirty tree.
Otherwise we cannot keep good track of deployment history.''
inputs.self.rev;
usesKustomize = target ? kustomization || target ? Kustomization;
augment = let
amendIfExists = path: rhs: manifest:
if true == lib.hasAttrByPath path manifest
then amendAlways rhs manifest
else manifest;
amendAlways = rhs: manifest: dmerge manifest rhs;
in
target:
lib.mapAttrs (
key:
lib.flip lib.pipe [
# metadata
(
manifest:
if manifest ? metadata.labels && manifest.metadata.labels == null
then lib.recursiveUpdate manifest {metadata.labels = {};}
else manifest
)
(
amendIfExists ["metadata"]
{
metadata.labels."app.kubernetes.io/version" = checkedRev;
metadata.labels."app.kubernetes.io/managed-by" = "std-kubectl";
}
)
(
if usesKustomize && (key == "kustomization" || key == "Kustomization")
# ensure a kustomization picks up the preprocessed resources
then
(manifest:
manifest
// {
resources =
map
(n: "${n}.json")
(builtins.attrNames (builtins.removeAttrs target ["meta" "Kustomization" "kustomization"]));
})
else lib.id
)
]
) (builtins.removeAttrs target ["meta"]);
generateManifests = target: let
writeManifest = name: manifest:
builtins.toFile name (builtins.unsafeDiscardStringContext (builtins.toJSON manifest));
renderManifests = lib.mapAttrsToList (name: manifest: ''
cp ${writeManifest name manifest} ${
if name == "kustomization" || name == "Kustomization"
then "Kustomization"
else "${name}.json"
}
'');
in
triv.runCommandLocal "generate-k8s-manifests" {} ''
mkdir -p $out
cd $out
${lib.concatStrings (renderManifests (augment target))}
'';
build = ''
declare manifest_path="$PRJ_DATA_HOME/${manifest_path}"
build() {
echo "Buiding manifests..."
echo
rm -rf "$manifest_path"
mkdir -p "$(dirname "$manifest_path")"
ln -s "${generateManifests target}" "$manifest_path"
echo "Manifests built in: $manifest_path"
}
'';
in [
/*
The `render` action will take this Nix manifest description, convert it to JSON,
and inject the git revision, after which the manifests can be diffed or applied
with the kubectl CLI or the `diff` and `apply` actions.
*/
(mkCommand currentSystem "render" "Build the JSON manifests" [] ''
${build}
build
'' {})
(mkCommand currentSystem "diff" "Diff the manifests against the cluster" [pkgs.kubectl pkgs.icdiff] ''
${build}
build
diff() {
kubectl diff ${
if usesKustomize
then "--kustomize"
else "--recursive --filename"
} "$manifest_path/";
}
${postDiffToGitHubSnippet "${fragmentRelPath}:diff" "$(diff || true)" "<code>std ${fragmentRelPath}:diff</code>"}
KUBECTL_EXTERNAL_DIFF="icdiff -N -r"
export KUBECTL_EXTERNAL_DIFF
diff
'' {})
(mkCommand currentSystem "apply" "Apply the manifests to K8s" [pkgs.kubectl pkgs.icdiff] ''
${build}
build
KUBECTL_EXTERNAL_DIFF="icdiff -N -r"
export KUBECTL_EXTERNAL_DIFF
diff() {
kubectl diff --server-side=true --field-manager="std-action-$(whoami)" ${
if usesKustomize
then "--kustomize"
else "--recursive --filename"
} "$manifest_path/";
return $?;
}
run() {
kubectl apply --server-side=true --field-manager="std-action-$(whoami)" ${
if usesKustomize
then "--kustomize"
else "--recursive --filename"
} "$manifest_path/";
}
diff
ret=$?
if [[ $ret == 0 ]] || [[ $ret == 1 ]]; then
${askUserToProceedSnippet "apply" "run"}
fi
'' {})
(mkCommand currentSystem "explore" "Interactively explore the manifests" [pkgs.fx] ''
fx ${
builtins.toFile "explore-k8s-manifests.json"
(builtins.unsafeDiscardStringContext (builtins.toJSON (augment target)))
}
'' {})
];
}
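A hedged sketch of a target in such a block; each attribute holds one manifest, and all names and specs below are placeholders:
# nix/mycell/deployments.nix (hypothetical)
{inputs, cell}: {
  my-service = {
    deployment = {
      apiVersion = "apps/v1";
      kind = "Deployment";
      metadata.name = "my-service";
      spec = {replicas = 1; /* selector, template, ... */};
    };
    service = {
      apiVersion = "v1";
      kind = "Service";
      metadata.name = "my-service";
      spec.ports = [{port = 80;}];
    };
  };
}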
Files (todo: vs data)
{root}:
/*
Use the Files Blocktype for any text data.
Available actions:
- explore
*/
let
inherit (root) mkCommand;
in
name: {
inherit name;
type = "files";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
file = toString target;
pkgs = inputs.nixpkgs.${currentSystem};
in [
(mkCommand currentSystem "explore" "interactively explore with bat" [pkgs.bat] ''
bat ${file}
'' {})
];
}
Microvms
Block type for managing microvm.nix configuration for declaring lightweight NixOS virtual machines.
{root}:
/*
Use the Microvms Blocktype for Microvm.nix - https://github.com/astro/microvm.nix
Available actions:
- run
- console
- microvm
*/
let
inherit (root) mkCommand;
in
name: {
inherit name;
type = "microvms";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: [
(mkCommand currentSystem "run" "run the microvm" [] ''
${target.config.microvm.runner.${target.config.microvm.hypervisor}}/bin/microvm-run
'' {})
(mkCommand currentSystem "console" "enter the microvm console" [] ''
${target.config.microvm.runner.${target.config.microvm.hypervisor}}/bin/microvm-console
'' {})
(mkCommand currentSystem "microvm" "pass any command to microvm" [] ''
${target.config.microvm.runner.${target.config.microvm.hypervisor}}/bin/microvm-"$@"
'' {})
];
}
Namaka
Block type for declaring Namaka snapshot tests.
{
root,
super,
}: let
inherit (root) mkCommand;
inherit (super) addSelectorFunctor;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "namaka";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
pkg = inputs.namaka.packages.${currentSystem}.default;
subdir = target.snap-dir or "";
in [
(mkCommand currentSystem "eval" "use transparently with namaka cli" [] ''
nix eval '.#${fragment}'
'' {})
(mkCommand currentSystem "check" "run namaka tests against snapshots" [pkg] ''
namaka ${subdir} check -c nix eval '.#${fragment}'
'' {})
(mkCommand currentSystem "review" "review pending namaka checks" [pkg] ''
namaka ${subdir} review -c nix eval '.#${fragment}'
'' {})
(mkCommand currentSystem "clean" "clean up pending namaka checks" [pkg] ''
namaka ${subdir} clean -c nix eval '.#${fragment}'
'' {})
];
}
Nixostests
Block type for declaring VM-based tests for NixOS.
{
root,
super,
}:
/*
Use the NixosTests Blocktype in order to instrument NixOS
VM-based tests inside your repository.
Available actions:
- run
- run-vm
- audit-script
- run-vm+
*/
let
inherit (root) mkCommand actions;
inherit (super) addSelectorFunctor;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "nixostests";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
pkgs = inputs.nixpkgs.${currentSystem};
inherit (pkgs) lib;
inherit (pkgs.stdenv) isLinux;
in
[
(mkCommand currentSystem "run" "run tests in headless vm" [] ''
# ${target.driver}
${target.driver}/bin/nixos-test-driver
'' {})
(mkCommand currentSystem "audit-script" "audit the test script" [pkgs.bat] ''
# ${target.driver}
bat --language py ${target.driver}/test-script
'' {})
(mkCommand currentSystem "run-vm" "run tests interactively in vm" [] ''
# ${target.driverInteractive}
${target.driverInteractive}/bin/nixos-test-driver
'' {})
(mkCommand currentSystem "run-vm+" "run tests with state from last run" [] ''
# ${target.driverInteractive}
${target.driverInteractive}/bin/nixos-test-driver --keep-vm-state
'' {})
]
++ lib.optionals isLinux [
(mkCommand currentSystem "iptables+" "setup nat redirect 80->8080 & 443->4433" [pkgs.iptables] ''
sudo iptables \
--table nat \
--insert OUTPUT \
--proto tcp \
--destination 127.0.0.1 \
--dport 443 \
--jump REDIRECT \
--to-ports 4433
sudo iptables \
--table nat \
--insert OUTPUT \
--proto tcp \
--destination 127.0.0.1 \
--dport 80 \
--jump REDIRECT \
--to-ports 8080
'' {})
(mkCommand currentSystem "iptables-" "remove nat redirect 80->8080 & 443->4433" [pkgs.iptables] ''
sudo iptables \
--table nat \
--delete OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 4433
sudo iptables \
--table nat \
--delete OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
'' {})
];
}
Nomad
Block type for rendering job descriptions for the Nomad Cluster scheduler.
{
nixpkgs,
root,
super,
}:
/*
Use the `nomad` Block Type for rendering job descriptions
for the Nomad Cluster scheduler. Each named attribute set under the
block contains a valid Nomad job description, written in Nix.
Available actions:
- render
- deploy
- explore
*/
let
inherit (root) mkCommand;
inherit (super) addSelectorFunctor askUserToProceedSnippet;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "nomadJobManifests";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
inherit (nixpkgs) lib;
pkgs = inputs.nixpkgs.${currentSystem};
job_name = baseNameOf fragmentRelPath;
job_path = "${dirOf fragmentRelPath}/${job_name}.json";
jobWithGitRevision = target: let
checkedRev = inputs.std.std.errors.bailOnDirty ''
Will not render jobs from a dirty tree.
Otherwise we cannot keep good track of deployment history.''
inputs.self.rev;
job = builtins.mapAttrs (_: v: lib.recursiveUpdate v {meta.rev = checkedRev;}) target.job;
in
builtins.toFile "${job_name}.json" (builtins.unsafeDiscardStringContext (builtins.toJSON {inherit job;}));
render = ''
declare job_path="$PRJ_DATA_HOME/${job_path}"
render() {
echo "Rendering to $job_path..."
rm -rf "$job_path"
ln -s "${jobWithGitRevision target}" "$job_path"
if status=$(nomad validate "$job_path"); then
echo "$status for $job_path"
fi
}
'';
in [
/*
The `render` action will take this Nix job description, convert it to JSON,
inject the git revision and validate the manifest, after which it can be run or
planned with the Nomad cli or the `deploy` action.
*/
(mkCommand currentSystem "render" "build the JSON job description" [pkgs.nomad] ''
${render}
render
'' {})
(mkCommand currentSystem "deploy" "Deploy the job to Nomad" [pkgs.nomad pkgs.jq] ''
${render}
render
if ! plan_results=$(nomad plan -force-color "$job_path"); then
echo "$plan_results"
run() { echo "$plan_results" | grep 'nomad job run -check-index'; }
${askUserToProceedSnippet "deploy" "run"}
else
echo "Job hasn't changed since last deployment, nothing to deploy"
fi
'' {})
(mkCommand currentSystem "explore" "interactively explore the Job defintion" [pkgs.nomad pkgs.fx] ''
${render}
render
fx "$job_path"
'' {})
];
}
Nvfetcher
Block type for managing nvfetcher configuration for updating package definition sources.
{
root,
super,
}:
/*
Use the nvfetcher Blocktype in order to generate package sources
with nvfetcher. See its docs for more details.
Available actions:
- fetch
*/
let
inherit (root) mkCommand actions;
inherit (super) addSelectorFunctor;
in
name: {
__functor = addSelectorFunctor;
inherit name;
type = "nvfetcher";
actions = {
currentSystem,
fragment,
fragmentRelPath,
target,
inputs,
}: let
pkgs = inputs.nixpkgs.${currentSystem};
inherit (pkgs) lib;
inherit (pkgs.stdenv) isLinux;
in [
(mkCommand currentSystem "fetch" "update source" [pkgs.nvfetcher] ''
targetname="$(basename ${fragmentRelPath})"
blockpath="$(dirname ${fragmentRelPath})"
cellpath="$(dirname "$blockpath")"
tmpfile="$(mktemp)"
updates="$PRJ_ROOT/${fragmentRelPath}.md"
nvfetcher \
--config "$PRJ_ROOT/$cellpath/nvfetcher.toml" \
--build-dir "$PRJ_ROOT/$blockpath" \
--changelog "$tmpfile" \
--filter "^$targetname$"
sed -i '''' -e "s|^|- \`$(date --iso-8601=m)\` |" "$tmpfile"
cat "$tmpfile" >> "$updates"
'' {})
];
}
Autogenerated documentation from ./src/lib/*.
Cell: lib
The Standard Library
This library intends to cover the Software Delivery Life Cycle in the Standard way.
Each Cell Block covers a specific SDLC topic.
Block: dev
The Dev Library
This library covers development aspects of the SDLC.
Target: mkArion
No description
mkArion
This is a transparent convenience proxy for hercules-ci/arion
’s lib.build
function.
However, the arion’s nixos
config option was removed.
As Standard claims to be the integration layer, it will not delegate integration via a foreign interface to commissioned tools, such as arion.
This is a bridge towards and from docker-compose users. Making nixos part of the interface would likely alienate that bridge for those users.
If you need a nixos-based container image, please check out the arion source code on how it’s done.
Target: mkMakes
No description
mkMakes
… provides an interface to makes
tasks
This is an integration for fluidattacks/makes
.
A version that has this patch is a prerequisite.
Usage example
{
inputs,
cell,
}: let
inherit (inputs.std.lib) dev;
in {
task = dev.mkMakes ./path/to/make/task/main.nix {};
}
Some refactoring of the tasks may be necessary. Let the error messages be your friend.
Target: mkNixago
No description
mkNixago
This is a transparent convenience proxy for nix-community/nixago
’s lib.${system}.make
function.
It is enriched with a forward contract towards std's enriched mkShell implementation.
In order to define numtide/devshell
’s commands
& packages
alongside the Nixago pebble,
just add the following attrset to the Nixago spec. It will be picked up automatically by mkShell
when that pebble
is used inside its config.nixago
-option.
{ inputs, cell }: {
foo = inputs.std.lib.dev.mkNixago {
/* ... */
packages = [ /* ... */ ];
commands = [ /* ... */ ];
devshell = { /* ... */ }; # e.g. for startup hooks
};
}
Target: mkShell
No description
mkShell
This is a transparent convenience proxy for numtide/devshell
’s mkShell
function.
It is enriched with a tight integration for std
Nixago pebbles:
{ inputs, cell}: {
default = inputs.std.lib.dev.mkShell {
/* ... */
nixago = [
(cell.nixago.foo {
data.qux = "xyz";
packages = [ pkgs.additional-package ];
})
cell.nixago.bar
cell.nixago.quz
];
};
}
Note that you can extend any Nixago Pebble at the calling site via a built-in functor, like in the example above.
Block: ops
The Ops Library
This library covers operational aspects of the SDLC.
Target: mkMicrovm
No description
mkMicrovm
… provides an interface to microvm
tasks
This is an integration for astro/microvm.nix
.
Usage example
{
inputs,
cell,
}: let
inherit (inputs.std.lib) ops;
in {
# microvm <module>
myhost = ops.mkMicrovm ({ pkgs, lib, ... }: { networking.hostName = "microvms-host";});
}
Target: mkOCI
No description
mkOCI
… is a function to generate an OCI Image via nix2container
.
The function signature is as follows:
Creates an OCI container image
Args:
name: The name of the image.
entrypoint: The entrypoint of the image. Must be a derivation.
tag: Optional tag of the image (defaults to output hash)
setup: A list of setup tasks to run to configure the container.
uid: The user ID to run the container as.
gid: The group ID to run the container as.
perms: A list of permissions to set for the container.
labels: An attribute set of labels to set for the container. The keys are
automatically prefixed with "org.opencontainers.image".
config: Additional options to pass to nix2container.buildImage's config.
options: Additional options to pass to nix2container.buildImage.
Returns:
An OCI container image (created with nix2container).
*/
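A hedged usage sketch, restricted to the documented arguments; registry path and entrypoint are placeholders:
{inputs, cell}: {
  my-image = inputs.std.lib.ops.mkOCI {
    name = "registry.example.com/my-tool";  # placeholder
    entrypoint = cell.packages.my-tool;     # must be a derivation
    tag = "latest";
  };
}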
Target: mkOperable
No description
mkOperable
… is a function interface into the second layer of packaging of the Standard SDLC Packaging pattern.
Its purpose is to provide an easy way to enrich a “package” into an “operable”.
The function signature is as follows:
Args:
package: The package to wrap.
runtimeScript: A bash script to run at runtime.
runtimeEnv: An attribute set of environment variables to set at runtime.
runtimeInputs: A list of packages to add to the runtime environment.
runtimeShell: The runtime shell. Defaults to bash.
debugInputs: A list of packages available in the debug shell.
livenessProbe: An optional derivation to run to check if the program is alive.
readinessProbe: An optional derivation to run to check if the program is ready.
Returns:
An operable for the given package.
*/
{
package,
Target: mkStandardOCI
No description
mkStandardOCI
… is a function interface into the third layer of packaging of the Standard SDLC Packaging pattern.
It produces a Standard OCI Image from an “operable”.
The function signature is as follows:
Creates an OCI container image using the given operable.
Args:
name: The name of the image.
operable: The operable to wrap in the image.
tag: Optional tag of the image (defaults to output hash)
setup: A list of setup tasks to run to configure the container.
uid: The user ID to run the container as.
gid: The group ID to run the container as.
perms: A list of permissions to set for the container.
labels: An attribute set of labels to set for the container. The keys are
automatically prefixed with "org.opencontainers.image".
debug: Whether to include debug tools in the container (coreutils).
config: Additional options to pass to nix2container.buildImage's config.
options: Additional options to pass to nix2container.
Returns:
An OCI container image (created with nix2container).
*/
The Standard Image
Standard images are minimal and hardened. They only contain required dependencies.
Contracts
The following contracts can be consumed:
/bin/entrypoint # always present
/bin/runtime # always present, drops into the runtime environment
/bin/live # if livenessProbe was set
/bin/ready # if readinessProbe was set
That’s it. There is nothing more to see.
All other dependencies are contained in /nix/store/...
.
The Debug Image
Debug Images wrap the standard images and provide additional debugging packages.
Hence, they are neither minimal nor hardened, because of the debugging packages’ added surface.
Contracts
The following contracts can be consumed:
/bin/entrypoint # always present
/bin/runtime # always present, drops into the runtime environment
/bin/debug # always present, drops into the debugging environment
/bin/live # if livenessProbe was set
/bin/ready # if readinessProbe was set
How to extend?
A Standard or Debug Image doesn’t have a package manager available in the environment.
Hence, to extend the image you have two options:
Nix-based extension
rec {
upstream = n2c.pullImage {
imageName = "docker.io/my-upstream-image";
imageDigest = "sha256:fffff.....";
sha256 = "sha256-ffffff...";
};
modified = n2c.buildImage {
name = "docker.io/my-modified-image";
fromImage = upstream;
contents = [nixpkgs.bashInteractive];
};
}
Dockerfile-based extension
FROM alpine AS builder
RUN apk add --no-cache curl
FROM docker.io/my-upstream-image
COPY --from=builder /... /
Please refer to the official dockerfile documentation for more details.
Target: readYAML
No description
Block: cfg
The Cfg Library
Standard comes packaged with some Nixago Pebbles for easy downstream re-use.
Some Pebbles may have a special integration for std
.
For example, the conform
Pebble can understand inputs.cells
and add each Cell as a so called “scope” to its
Conventional Commit configuration.
If you’re rather looking for Nixago Presets (i.e. pebbles that already have an opinionated default), please refer to the nixago presets, instead.
Target: adrgen
No description
adrgen
adrgen
is a great tool to manage Architecture Decision Records.
Definition:
let
inherit (inputs) nixpkgs;
in {
data = {};
output = "adrgen.config.yml";
format = "yaml";
commands = [{package = nixpkgs.adrgen;}];
}
Target: conform
No description
conform
Conform your code to policies, e.g. in a pre-commit hook.
This version is wrapped; it can auto-enhance the conventional
commit scopes with your cells
as follows:
{ inputs, cell}: let
inherit (inputs.std) lib;
in {
default = lib.dev.mkShell {
/* ... */
nixago = [
(lib.cfg.conform {data = {inherit (inputs) cells;};})
];
};
}
Definition:
let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
in {
data = {};
format = "yaml";
output = ".conform.yaml";
packages = [nixpkgs.conform];
apply = d: {
policies =
[]
++ (l.optional (d ? commit) {
type = "commit";
spec =
d.commit
// l.optionalAttrs (d ? cells) {
conventional =
d.commit.conventional
// {
scopes =
d.commit.conventional.scopes
++ (l.subtractLists l.systems.doubles.all (l.attrNames d.cells));
};
};
})
++ (l.optional (d ? license) {
type = "license";
spec = d.license;
});
};
}
Target: editorconfig
No description
editorconfig
Most editors understand the .editorconfig
file and autoconfigure themselves accordingly.
Definition:
let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
in {
data = {};
output = ".editorconfig";
engine = request: let
inherit (request) data output;
name = l.baseNameOf output;
value = {
globalSection = {root = data.root or true;};
sections = l.removeAttrs data ["root"];
};
in
nixpkgs.writeText name (l.generators.toINIWithGlobalSection {} value);
packages = [nixpkgs.editorconfig-checker];
}
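A hedged usage sketch, following the same mkShell/nixago pattern as the other pebbles; the section values are illustrative:
{ inputs, cell }: let
  inherit (inputs.std) lib;
in {
  default = lib.dev.mkShell {
    nixago = [
      (lib.cfg.editorconfig {
        data = {
          root = true;
          "*" = {
            indent_style = "space";
            indent_size = 2;
          };
        };
      })
    ];
  };
}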
Target: githubsettings
No description
githubsettings
Syncs repository settings defined in .github/settings.yml
to GitHub, enabling Pull Requests for repository settings.
In order to use this, you also need to install the GitHub Settings App. Please see the App’s Homepage for the configuration schema.
Definition:
{
data = {};
output = ".github/settings.yml";
format = "yaml";
hook.mode = "copy"; # let the Github Settings action pick it up outside of devshell
}
Target: just
No description
just
Just is a general purpose command runner with syntax inspired by make
.
Tasks are configured via an attribute set where the name is the name of the task
(i.e. just <task>
) and the value is the task definition (see below for an
example). The generated Justfile
should be committed to allow non-Nix users to
on-ramp without needing access to Nix.
Task dependencies (i.e. treefmt
below) should be included in packages
and
will automatically be picked up in the devshell.
{ inputs, cell }:
let
inherit (inputs) nixpkgs;
inherit (inputs.std) lib;
in
{
default = lib.dev.mkShell {
/* ... */
nixago = [
(lib.cfg.just {
packages = [ nixpkgs.treefmt ];
data = {
tasks = {
fmt = {
description = "Formats all changed source files";
content = ''
treefmt $(git diff --name-only --cached)
'';
};
};
};
})
];
};
}
It’s also possible to override the interpreter for a task:
{
# ...
hello = {
description = "Prints hello world";
interpreter = nixpkgs.python3;
content = ''
print("Hello, world!")
'';
};
}
# ...
Definition:
let
inherit (inputs) nixpkgs;
l = nixpkgs.lib // builtins;
in {
data = {};
apply = d: let
# Transforms interpreter attribute if present
# nixpkgs.pkgname -> nixpkgs.pkgname + '/bin/<name>'
getExe = x: "${l.getBin x}/bin/${x.meta.mainProgram or (l.getName x)}";
final =
d
// {
tasks =
l.mapAttrs
(n: v:
v // l.optionalAttrs (v ? interpreter) {interpreter = getExe v.interpreter;})
d.tasks;
};
in {
data = final; # CUE expects structure to be wrapped with "data"
};
format = "text";
output = "Justfile";
packages = [nixpkgs.just];
hook = {
mode = "copy";
};
engine = inputs.nixago.engines.cue {
files = [./just.cue];
flags = {
expression = "rendered";
out = "text";
};
postHook = ''
${l.getExe nixpkgs.just} --unstable --fmt -f $out
'';
};
}
Target: lefthook
No description
lefthook
Lefthook is a fast (parallel execution) and elegant git hook manager.
Definition:
let
inherit (inputs) nixpkgs;
lib = nixpkgs.lib // builtins;
mkScript = stage:
nixpkgs.writeScript "lefthook-${stage}" ''
#!${nixpkgs.runtimeShell}
[ "$LEFTHOOK" == "0" ] || ${lib.getExe nixpkgs.lefthook} run "${stage}" "$@"
'';
toStagesConfig = config:
lib.removeAttrs config [
"colors"
"extends"
"skip_output"
"source_dir"
"source_dir_local"
];
in {
data = {};
format = "yaml";
output = "lefthook.yml";
packages = [nixpkgs.lefthook];
# Add an extra hook for adding required stages whenever the file changes
hook.extra = config:
lib.pipe config [
toStagesConfig
lib.attrNames
(lib.map (stage: ''ln -sf "${mkScript stage}" ".git/hooks/${stage}"''))
(stages:
lib.optional (stages != []) "mkdir -p .git/hooks"
++ stages)
(lib.concatStringsSep "\n")
];
}
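A hedged usage sketch; the hook and command follow lefthook's configuration schema and are illustrative:
{ inputs, cell }: let
  inherit (inputs) nixpkgs;
  inherit (inputs.std) lib;
in {
  default = lib.dev.mkShell {
    nixago = [
      (lib.cfg.lefthook {
        data = {
          pre-commit.commands.treefmt.run = "treefmt {staged_files}";
        };
        packages = [nixpkgs.treefmt];
      })
    ];
  };
}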
Target: mdbook
No description
mdbook
Write clean docs for humans with mdbook
.
This version comes preset with this gem to make any
Solution Architect extra happy: mdbook-kroki-preprocessor
Definition:
let
inherit (inputs) nixpkgs;
in {
data = {};
output = "book.toml";
format = "toml";
hook.extra = d: let
sentinel = "nixago-auto-created: mdbook-build-folder";
file = ".gitignore";
str = ''
# ${sentinel}
${d.build.build-dir or "book"}/**
'';
in ''
# Configure gitignore
create() {
echo -n "${str}" > "${file}"
}
append() {
echo -en "\n${str}" >> "${file}"
}
if ! test -f "${file}"; then
create
elif ! grep -qF "${sentinel}" "${file}"; then
append
fi
'';
commands = [{package = nixpkgs.mdbook;}];
}
Target: treefmt
No description
treefmt
A code-tree formatter to format the entire code tree extremely fast (in parallel and with a smart cache).
Definition:
let
inherit (inputs) nixpkgs;
in {
data = {};
output = "treefmt.toml";
format = "toml";
commands = [{package = nixpkgs.treefmt;}];
}
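A hedged usage sketch; the formatter entry follows treefmt's TOML schema and the chosen formatter is illustrative:
{ inputs, cell }: let
  inherit (inputs) nixpkgs;
  inherit (inputs.std) lib;
in {
  default = lib.dev.mkShell {
    nixago = [
      (lib.cfg.treefmt {
        data.formatter.nix = {
          command = "alejandra";
          includes = ["*.nix"];
        };
        packages = [nixpkgs.alejandra];
      })
    ];
  };
}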
Autogenerated documentation from ./src/std/*.
Cell: std
The std
Cell
… is the only cell in divnix/std
and provides only very limited functionality.
- It contains the TUI, in ./cli.
- It contains a devshellProfile in ./devshellProfiles.
- It contains a growing number of second level library functions in ./lib.
- Packages that are used in std devshells are proxied in ./packages.
That’s it.
Block: cli
Block: devshellProfiles
std
’s devshellProfiles
This Cell Block only exports a single default
devshellProfile.
Any std
ized repository should include this into its numtide/devshell
in order to provide any visitor with the fully pre-configured std
TUI.
It also wires & instantiates a decent ADR tool. Or were you planning to hack away without some minimal conscious effort of decision making and recording? 😅
Usage Example
# ./nix/local/shells.nix
{
inputs,
cell,
}: let
l = nixpkgs.lib // builtins;
inherit (inputs) nixpkgs;
inherit (inputs.std) std;
in
l.mapAttrs (_: std.lib.mkShell) {
# `default` is a special target in newer nix versions
# see: harvesting below
default = {
name = "My Devshell";
# make `std` available in the numtide/devshell
imports = [ std.devshellProfiles.default ];
};
}
# ./flake.nix
{
inputs.std.url = "github:divnix/std";
outputs = inputs:
inputs.std.growOn {
inherit inputs;
cellsFrom = ./nix;
cellBlocks = [
/* ... */
(inputs.std.blockTypes.devshells "shells")
];
}
# soil for compatibility ...
{
# ... with `nix develop` - `default` is a special target for `nix develop`
devShells = inputs.std.harvest inputs.self ["local" "shells"];
};
}
Block: errors
Error Message Functions
This Cell Block comprises several error message functions that can be used in different situations.
Target: removeBy
No description
removeBy
{inputs}: time: body: let
l = inputs.nixpkgs.lib // builtins;
ansi = import ./ansi.nix;
pad = s: let
n = 17;
prefix = l.concatStringsSep "" (l.genList (_: " ") (n - (l.stringLength s)));
in
prefix + s;
indent = s: let
n = 5;
prefix = l.concatStringsSep "" (l.genList (_: " ") n);
lines = l.splitString "\n" s;
in
" 📝 │ " + (l.concatStringsSep "\n${prefix}│ " lines);
warn = let
apply =
l.replaceStrings
(map (key: "{${key}}") (l.attrNames ansi))
(l.attrValues ansi);
in
msg:
l.trace (apply "🔥 {bold}{196}Standard Deprecation Notices - {220}run `std check' to show!{reset}")
l.traceVerbose (apply "\n{202}${msg}{reset}");
in
warn ''
─────┬─────────────────────────────────────────────────────────────────────────
💪 │ {bold}Action Required !{un-bold}
─────┼─────────────────────────────────────────────────────────────────────────
{italic}${indent body}{un-italic}
─────┼─────────────────────────────────────────────────────────────────────────
📅 │ {bold}Scheduled Removal: ${pad time}{un-bold}
─────┴─────────────────────────────────────────────────────────────────────────
''
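removeBy is curried: after the removal date and the message it still expects the value it decorates, so it can wrap a deprecated Target in place. A hypothetical call site, assuming the block is reachable as cell.errors from a sibling Cell Block:
# hypothetical deprecation shim in another Cell Block of the std cell
{
  inputs,
  cell,
}: {
  # prints the banner on evaluation; the full notice only shows
  # with `--trace-verbose` (or `std check`, as the banner says)
  oldName =
    cell.errors.removeBy "2025-12-31" ''
      `oldName` was renamed to `newName`; please update your references.
    ''
    cell.lib.newName; # hypothetical replacement target
}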
Target: requireInput
No description
requireInput
{inputs}: input: url: target: let
l = inputs.nixpkgs.lib // builtins;
# other than `divnix/blank`
isBlank = input: inputs.${input}.narHash == "sha256-O8/MWsPBGhhyPoPLHZAuoZiiHo9q6FLlEeIDEXuj6T4=";
ansi = import ./ansi.nix;
pad = n: s: let
prefix = l.concatStringsSep "" (l.genList (_: " ") n);
in
prefix + s;
indent = s: let
n = 5;
prefix = l.concatStringsSep "" (l.genList (_: " ") n);
lines = l.splitString "\n" s;
in
l.concatStringsSep "\n${prefix}│ " lines;
warn = let
apply =
l.replaceStrings
(map (key: "{${key}}") (l.attrNames ansi))
(l.attrValues ansi);
in
msg: l.trace (apply "🚀 {bold}{200}Standard Input Overloading{reset}${msg}") "";
body = ''
In order to use ${target}, add to {bold}flake.nix{un-bold}:
inputs.std.inputs.${input}.url = "${url}";
'';
inputs' = let
names = l.attrNames (l.removeAttrs inputs ["self" "cells" "blank" "nixpkgs"]);
nameLengths = map l.stringLength names;
maxNameLength =
l.foldl'
(max: v:
if v > max
then v
else max)
0
nameLengths;
lines =
l.map (
name: "- ${name}${
if isBlank name
then pad (maxNameLength - (l.stringLength name)) " | blanked out"
else ""
}"
)
names;
in
"Declared Inputs:\n" + (l.concatStringsSep "\n" lines);
in
assert l.assertMsg (! (isBlank input)) (warn ''
─────┬─────────────────────────────────────────────────────────────────────────
🏗️ │ {bold}Input Overloading for ${target}{un-bold}
─────┼─────────────────────────────────────────────────────────────────────────
📝 │ {italic}${indent body}{un-italic}
─────┼─────────────────────────────────────────────────────────────────────────
🙋 │ ${indent inputs'}
─────┴─────────────────────────────────────────────────────────────────────────
''); inputs
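requireInput returns std's own inputs once the assertion passes, so a Cell Block that depends on an optional (normally blanked-out) input can gate its access through it. A hypothetical sketch, assuming the target is reachable as cell.errors.requireInput:
# hypothetical Cell Block that needs the optional `nixago` input
{
  inputs,
  cell,
}: let
  # aborts with the banner above if `nixago` still points at divnix/blank
  inputs' =
    cell.errors.requireInput
    "nixago"
    "github:nix-community/nixago"
    "std's nixago integration";
in {
  # safe to dereference the overloaded input from here on
  inherit (inputs'.nixago) engines;
}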
Block: templates
Nix Templates
These are opinionated template projects designed to get you kick-started.
You can make use of them through the Nix CLI, via:
cd my-new-project
nix flake init -t github:divnix/std#<template-name>
Please consult the template section in the docs for an overview.
Target: minimal
No description
A minimal project template with docs!
Included Configuration
- devshell for your contribution environments!
- treefmt for formatting all the things!
- mdbook for making documentation part of your workflow!
- lefthook for commit discipline and a clean history!
- GitHub Settings App for configuring GitHub declaratively!
Bootstrap
# make a new empty project dir
mkdir my-project
cd my-project
# grab the template
nix flake init -t github:divnix/std#minimal
# see which values to change
grep -r --include=\*.nix 'CONFIGURE-ME' .
# do some initialization
git init && git add .
# enter the devshell and effectuate repo configuration
direnv allow
git add . && git commit -m "feat: initial commit"
Target: rust
No description
Standard, and Nix and Rust, oh my!
This template uses Nix to create a sane development shell for Rust projects, Standard for keeping your Nix code well organized, Fenix for pulling the latest rust binaries via Nix, and Crane for building Rust projects in Nix incrementally, making quick iteration a breeze.
Rust Analyzer is also wired up properly for immediate use from a terminal based editor with language server support. Need one with stellar Nix and Rust support? Try Helix!
Bootstrap
# make a new empty project dir
mkdir my-project
cd my-project
# grab the template
nix flake init -t github:divnix/std#rust
# do some initialization
git init && git add .
# enter the devshell
direnv allow || nix develop
# continue some initialization
cargo init # pass --lib for library projects
cargo build # to generate Cargo.lock
git add . && git commit -m "init"
Glossary
Cell
: A Cell is the folder name of the first level under ${cellsFrom}. They represent a coherent semantic collection of functionality.
Cell Block
: A Cell Block is the specific named type of a Standard (and hence: Flake) output.
Block Type
: A Block Type is the unnamed generic type of a Cell Block and may or may not implement Block Type Actions.
Target
: A Target is the actual output of a Cell Block. If there is only one intended output, it is called default by convention.
Action
: An Action is a runnable procedure implemented on the generic Block Type. These are abstract procedures that are valuable in any concrete Cell Block of that Block Type.
The Registry
: The Registry, in the context of Standard and if it doesn't refer to a well-known external concept, means the .#__std flake output. This Registry holds different Registers that serve different discovery purposes. For example, the CLI can discover relevant metadata, or a CI can discover desired pipeline targets.